Most proper way to throw an exception as validation for a reactive stream

I have a reactive stream in which I would like one step to apply a validation check that, if it fails, throws an exception. Is there a commonly accepted style for doing that? From what I can tell, I have three options (using Mono): then(), filter(), and map().
filter() is closest to the flow I want, in that I'm not actually trying to change the type of data in the stream or switch to another stream. But filter is supposed to return true/false to decide which items to keep, so it's a little goofy to always return true.
then() lets me explicitly choose error/success emissions, but for this type of validation I'm not always able to split it off easily into its own private method, and the boilerplate makes the stream declaration messier to read.
map() is pretty much the same as using filter(), except you always return the input instead of true.
As a very contrived example, consider a service that has a list of 0 or more letters to send to a person:
public interface Person {
    UUID getId();
    List<String> getKnownLanguages();
}

public interface Letter {
    String getLanguage();
}

public class LetterService {
    public Letter findOneLetterForPerson(final UUID id) { /* ... */ }
    public void removeLetter(final Letter letter) { /* ... */ }
}
Which is the best option for creating a method that looks like this:
public Mono<Optional<Letter>> getNextValidLetterForPerson(final Person person) {
    return Mono.just(person)
        .and(this::getNextLetterForPerson)
        /////////////////////////////////////////
        //
        .filter(this::validatePersonCanReadLetter1)
        .map(Tuple2::getT2)
        //
        // OR
        //
        .then(this::validatePersonCanReadLetter2)
        //
        // OR
        //
        .map(this::validatePersonCanReadLetter3)
        //
        /////////////////////////////////////////
        // If the letter was invalid for the person, remove the letter from
        // the system as a side effect, and retry retrieving a letter to send
        .doOnError(this::removeInvalidLetter)
        .retry(this::ifLetterValidationFailed)
        // Map the result to an appropriate Optional
        .map(Optional::of)
        .defaultIfEmpty(Optional.empty());
}
The supporting methods used in the example above are:
public static class LetterInvalidException extends RuntimeException {
    private Letter mLetter;

    public LetterInvalidException(final Letter letter) { mLetter = letter; }

    public Letter getLetter() { return mLetter; }
}

/** Gets the next letter for a person, as a reactive stream */
private Mono<Letter> getNextLetterForPerson(final Person person) {
    return Mono.create(emitter -> {
        final Letter letter = mLetterService.findOneLetterForPerson(person.getId());
        if (letter != null) {
            emitter.success(letter);
        }
        else {
            emitter.success();
        }
    });
}

/** Used to check whether the cause of an error was due to an invalid letter */
private boolean ifLetterValidationFailed(final Throwable e) {
    return e instanceof LetterInvalidException;
}

/** Used to remove an invalid letter from the system */
private void removeInvalidLetter(final Throwable e) {
    if (ifLetterValidationFailed(e)) {
        mLetterService.removeLetter(((LetterInvalidException) e).getLetter());
    }
}

/*************************************************************************
 *
 *************************************************************************/

private boolean validatePersonCanReadLetter1(final Tuple2<Person, Letter> tuple) {
    final Person person = tuple.getT1();
    final Letter letter = tuple.getT2();
    if (!person.getKnownLanguages().contains(letter.getLanguage())) {
        throw new LetterInvalidException(letter);
    }
    return true;
}

private Mono<Letter> validatePersonCanReadLetter2(final Tuple2<Person, Letter> tuple) {
    return Mono.create(emitter -> {
        final Person person = tuple.getT1();
        final Letter letter = tuple.getT2();
        if (!person.getKnownLanguages().contains(letter.getLanguage())) {
            emitter.error(new LetterInvalidException(letter));
        }
        else {
            emitter.success(letter);
        }
    });
}

private Letter validatePersonCanReadLetter3(final Tuple2<Person, Letter> tuple) {
    final Person person = tuple.getT1();
    final Letter letter = tuple.getT2();
    if (!person.getKnownLanguages().contains(letter.getLanguage())) {
        throw new LetterInvalidException(letter);
    }
    return letter;
}
Ideally I would have loved a method such as Mono<T> validate(..) that would allow testing the stream item and either throwing an exception or returning one (if returned, the framework would treat it as an error), but I'm rather new to reactive programming and didn't see anything that worked like that.

Maybe handle is a better solution; it can serve as a combination of map and filter:
Mono.just(p).and(test::getNextLetterForPerson)
    .handle((tuple, sink) -> {
        final Person person = tuple.getT1();
        final Letter letter = tuple.getT2();
        if (!person.getKnownLanguages().contains(letter.getLanguage())) {
            sink.error(new LetterInvalidException(letter));
            return;
        }
        sink.next(letter);
    })
    .subscribe(value -> System.out.println(((Letter) value).getLanguage()),
               t -> System.out.println(t.getMessage()));
As you can see, it's almost like your validatePersonCanReadLetter3.

Related

Can Kaitai Struct be used to describe TLV data without creating new types for each field?

I'm reverse engineering a file format that stores each field as TLV blocks (type, length, value).
The fields do not have to be in order, or even present at all. Their presence is denoted by a sentinel, which is a 16-bit type identifier and a 32-bit end offset. There are hundreds of unique identifiers, but a decent chunk of those are just single primitive values. Aside from denoting the type, they can also identify which field the data should be stored in.
It is also worth noting that there will never be a duplicate id on a parent structure. The only time it can occur is if there are multiple objects of the same type in an array/list.
I have successfully written a Kaitai definition for one of them:
meta:
  id: struct_02ea
  endian: le
seq:
  - id: unk_00
    type: s4
  - id: fields
    type: field_block
    repeat: eos
types:
  sentinel:
    seq:
      - id: id
        type: u2
      - id: end_offset
        type: u4
  field_block:
    seq:
      - id: sentinel
        type: sentinel
      - id: value
        type:
          switch-on: sentinel.id
          cases:
            0xF0: u1
            0xF1: u1
            0xF2: u1
            0xF3: u1
            0xF4: u4
            0xF5: u4
        size: sentinel.end_offset - _root._io.pos
Handling things this way does work, and I could likely map out the entire format like this. However, when it comes time to compile this definition into another language, things get nasty.
Since I am wrapping each field in a field_block, the generated code stores these values in that type of object. This is incredibly inefficient when half of the generated field_block objects store a single integer. It would also require the consuming code to iterate through a list of each field block in order to get the actual field's value.
Ideally, I would like to define this structure so that the sentinels are only parsed while Kaitai is reading the data, and each value would be mapped to a field on the parent structure.
Is this possible? This technology is really cool, and I'd love to use it in my project, but I feel like the overhead that this is generating is a lot more trouble than it's worth.
Here's an example of the definition when compiled into C#:
using System.Collections.Generic;

namespace Kaitai
{
    public partial class Struct02ea : KaitaiStruct
    {
        public static Struct02ea FromFile(string fileName)
        {
            return new Struct02ea(new KaitaiStream(fileName));
        }

        public Struct02ea(KaitaiStream p__io, KaitaiStruct p__parent = null, Struct02ea p__root = null) : base(p__io)
        {
            m_parent = p__parent;
            m_root = p__root ?? this;
            _read();
        }

        private void _read()
        {
            _unk00 = m_io.ReadS4le();
            _fields = new List<FieldBlock>();
            {
                var i = 0;
                while (!m_io.IsEof) {
                    _fields.Add(new FieldBlock(m_io, this, m_root));
                    i++;
                }
            }
        }

        public partial class Sentinel : KaitaiStruct
        {
            public static Sentinel FromFile(string fileName)
            {
                return new Sentinel(new KaitaiStream(fileName));
            }

            public Sentinel(KaitaiStream p__io, Struct02ea.FieldBlock p__parent = null, Struct02ea p__root = null) : base(p__io)
            {
                m_parent = p__parent;
                m_root = p__root;
                _read();
            }

            private void _read()
            {
                _id = m_io.ReadU2le();
                _endOffset = m_io.ReadU4le();
            }

            private ushort _id;
            private uint _endOffset;
            private Struct02ea m_root;
            private Struct02ea.FieldBlock m_parent;
            public ushort Id { get { return _id; } }
            public uint EndOffset { get { return _endOffset; } }
            public Struct02ea M_Root { get { return m_root; } }
            public Struct02ea.FieldBlock M_Parent { get { return m_parent; } }
        }

        public partial class FieldBlock : KaitaiStruct
        {
            public static FieldBlock FromFile(string fileName)
            {
                return new FieldBlock(new KaitaiStream(fileName));
            }

            public FieldBlock(KaitaiStream p__io, Struct02ea p__parent = null, Struct02ea p__root = null) : base(p__io)
            {
                m_parent = p__parent;
                m_root = p__root;
                _read();
            }

            private void _read()
            {
                _sentinel = new Sentinel(m_io, this, m_root);
                switch (Sentinel.Id) {
                case 243: {
                    _value = m_io.ReadU1();
                    break;
                }
                case 244: {
                    _value = m_io.ReadU4le();
                    break;
                }
                case 245: {
                    _value = m_io.ReadU4le();
                    break;
                }
                case 241: {
                    _value = m_io.ReadU1();
                    break;
                }
                case 240: {
                    _value = m_io.ReadU1();
                    break;
                }
                case 242: {
                    _value = m_io.ReadU1();
                    break;
                }
                default: {
                    _value = m_io.ReadBytes((Sentinel.EndOffset - M_Root.M_Io.Pos));
                    break;
                }
                }
            }

            private Sentinel _sentinel;
            private object _value;
            private Struct02ea m_root;
            private Struct02ea m_parent;
            public Sentinel Sentinel { get { return _sentinel; } }
            public object Value { get { return _value; } }
            public Struct02ea M_Root { get { return m_root; } }
            public Struct02ea M_Parent { get { return m_parent; } }
        }

        private int _unk00;
        private List<FieldBlock> _fields;
        private Struct02ea m_root;
        private KaitaiStruct m_parent;
        public int Unk00 { get { return _unk00; } }
        public List<FieldBlock> Fields { get { return _fields; } }
        public Struct02ea M_Root { get { return m_root; } }
        public KaitaiStruct M_Parent { get { return m_parent; } }
    }
}
Affiliate disclaimer: I'm a Kaitai Struct maintainer (see my GitHub profile).
Since I am wrapping each field in a field_block, the generated code stores these values in that type of object. This is incredibly inefficient when half of the generated field_block objects store a single integer. It would also require the consuming code to iterate through a list of each field block in order to get the actual field's value.
I think that rather than trying to describe the entire format with an ultimate Kaitai Struct specification, it's better for you not to let the generated code parse all the fields automatically. Move the parsing control to your application code, where you use the type Struct02ea.FieldBlock that represents an individual field, and basically replicate the "repeat until end of stream" loop that the generated code you posted does:
_fields = new List<FieldBlock>();
{
    var i = 0;
    while (!m_io.IsEof) {
        _fields.Add(new FieldBlock(m_io, this, m_root));
        i++;
    }
}
The advantage of doing so is that you can adjust the loop to fit your needs. To avoid the overhead you describe, you'll probably want to keep the Struct02ea.FieldBlock object in a local variable inside the loop body, pull only the values you care about (save them in your compact, consumer-friendly output structures) and let it leave the scope after the loop iteration ends. This will allow each original FieldBlock object to get garbage-collected once you process it, so the overhead they have will be limited to a single instance and not multiplied by the number of fields in the file.
The most straightforward and seamless way to prevent the Kaitai Struct-generated code from parsing the fields (but otherwise keep everything the same) is to add if: false in the KSY specification, as @webbnh suggested in a GitHub issue:
seq:
  - id: unk_00
    type: s4
  - id: fields
    type: field_block
    repeat: eos
    if: false # add this
The if: false works better than omitting it from seq entirely, because the kaitai-struct-compiler has occasional troubles with unused types (when compiling the KSY spec with unused types, you may get an error "Unable to derive _parent type in ..." due to a compiler bug). But with this if: false trick, you can't run into them because the field_block type is no longer unused.
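Putting the two suggestions together, the consuming code can drive the loop itself. Below is a minimal sketch of what that might look like; CompactRecord, its members, the Struct02eaReader class and the field IDs 0xF0/0xF4 are hypothetical placeholders, while Struct02ea, FieldBlock, Sentinel and KaitaiStream come from the generated code above, assumed to be regenerated with if: false so that constructing the root object no longer parses the fields:
using Kaitai;

// Hypothetical compact output type -- shape it however your application needs.
public class CompactRecord
{
    public byte SomeFlag;     // e.g. the value of field 0xF0
    public uint SomeCounter;  // e.g. the value of field 0xF4
}

public static class Struct02eaReader
{
    public static CompactRecord ReadCompact(string fileName)
    {
        var io = new KaitaiStream(fileName);

        // With "if: false" in place, this only reads unk_00 and stops.
        var root = new Struct02ea(io);

        // Replicate the "repeat: eos" loop ourselves, keeping each FieldBlock
        // local so it can be garbage-collected once its value is copied out.
        var result = new CompactRecord();
        while (!io.IsEof)
        {
            var block = new Struct02ea.FieldBlock(io, root, root);
            switch (block.Sentinel.Id)
            {
                case 0xF0:
                    result.SomeFlag = (byte)block.Value;
                    break;
                case 0xF4:
                    result.SomeCounter = (uint)block.Value;
                    break;
                // Fields we don't care about are skipped; FieldBlock has already
                // consumed their bytes while parsing.
            }
        }
        return result;
    }
}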

IEventBroker subscription handles the same event more than once and handles incorrectly

I am bootstrapping the IEventBroker in a compat-layer Eclipse RCP app.
I have two views: Triggerer and Receiver.
Triggerer (excerpts):
private IEventBroker eventBroker = PlatformUI.getWorkbench().getService(IEventBroker.class);

btn.addSelectionListener(new SelectionAdapter() {
    public void widgetSelected(SelectionEvent e) {
        IStructuredSelection selection = viewer.getStructuredSelection();
        List selectionList = selection.toList();
        for (Object s : selectionList) {
            if (s instanceof MyObject) {
                matches.add(s);
            }
        }
        eventBroker.send(MyEventConstants.TOPIC_OBJECT_CHANGED, matches);
    }
});
Receiver (excerpts):
@Override
public void handleEvent(Event event) {
    Object data = event.getProperty(EVENT_DATA);
    switch (event.getTopic()) {
        case MyEventConstants.TOPIC_OBJECT_CHANGED:
            try {
                if (data instanceof ArrayList) {
                    List<MyObject> matches = null;
                    try {
                        matches = (List<MyObject>) data;
                    }
                    catch (ClassCastException e) {
                    }
                    Subthing sub = buildSubthing(matches);
                    getContentViewer().getContents()
                        .setAll(Collections.singletonList(sub));
                }
            }
            break;
    }
}
buildSubthing does stuff with the respective received data, and sets it to the contents of a GEF4 editor.
In some cases this works just fine, and in some it doesn't.
handleEvent() is triggered more than once, although the event hashCode is always the same, and I don't understand why. The topic is the same and the data is also the same. However, buildSubthing just stalls for no apparent reason with some data, while it doesn't with other data. The data is structurally the same in both cases.
How can I control how often handleEvent is called? I think the number of times it's called is the reason why the Subthing is sometimes not constructed correctly.

Create observables using straight methods

I need to gather some data by calling methods that connect to a web service.
Problem: imagine I need to update the text of a label control according to this remotely gathered information. Until all this data has been gathered, I'm not able to show the label.
Desired: I'd like to first show the label with a default text, and update the label content as the information arrives (please don't take this description too literally; I'm just trying to summarize my real situation).
I'd like to create an observable sequence from these methods. However, these methods do not have the same signature. For example:
int GetInt() {
    return service.GetInt();
}

string GetString() {
    return service.GetString();
}

string GetString2() {
    return service.GetString2();
}
These methods are not async.
Is it possible to create an observable sequence of these methods?
How could I create it?
And what's the best alternative to achieve my goal?
Creating custom observable sequences can be achieved with Observable.Create. An example using your requirements is shown below:
private int GetInt()
{
    Thread.Sleep(1000);
    return 1;
}

private string GetString()
{
    Thread.Sleep(1000);
    return "Hello";
}

private string GetString2()
{
    Thread.Sleep(2000);
    return "World!";
}

private IObservable<string> RetrieveContent()
{
    return Observable.Create<string>(
        observer =>
        {
            observer.OnNext("Default Text");

            int value = GetInt();
            observer.OnNext($"Got value {value}. Getting string...");

            string string1 = GetString();
            observer.OnNext($"Got string {string1}. Getting second string...");

            string string2 = GetString2();
            observer.OnNext(string2);

            observer.OnCompleted();
            return Disposable.Empty;
        }
    );
}
Note how I have emulated network delay by introducing a Thread.Sleep call into each of the GetXXX methods. In order to ensure your UI doesn't hang when subscribing to this observable, you should subscribe as follows:
IDisposable subscription = RetrieveContent()
    .SubscribeOn(TaskPoolScheduler.Default)
    .ObserveOn(DispatcherScheduler.Current)
    .Subscribe(text => Label = text);
This code uses the .SubscribeOn(TaskPoolScheduler.Default) extension method to start the observable sequence on a TaskPool thread. That thread will be blocked by the Thread.Sleep calls but, as it is not the UI thread, your UI will remain responsive. Then, to ensure we update the UI on the UI thread, we use .ObserveOn(DispatcherScheduler.Current) to marshal the updates onto the UI thread before setting the (data-bound) Label property.
Hope this is what you were looking for, but leave a comment if not and I'll try to help further.
I would look at creating a wrapper class for your service to expose the values as separate observables.
So, start with a service interface:
public interface IService
{
    int GetInt();
    string GetString();
    string GetString2();
}
...and then you write ServiceWrapper:
public class ServiceWrapper : IService
{
    private IService service;
    private Subject<int> subjectGetInt = new Subject<int>();
    private Subject<string> subjectGetString = new Subject<string>();
    private Subject<string> subjectGetString2 = new Subject<string>();

    public ServiceWrapper(IService service)
    {
        this.service = service;
    }

    public int GetInt()
    {
        var value = service.GetInt();
        this.subjectGetInt.OnNext(value);
        return value;
    }

    public IObservable<int> GetInts()
    {
        return this.subjectGetInt.AsObservable();
    }

    public string GetString()
    {
        var value = service.GetString();
        this.subjectGetString.OnNext(value);
        return value;
    }

    public IObservable<string> GetStrings()
    {
        return this.subjectGetString.AsObservable();
    }

    public string GetString2()
    {
        var value = service.GetString2();
        this.subjectGetString2.OnNext(value);
        return value;
    }

    public IObservable<string> GetString2s()
    {
        return this.subjectGetString2.AsObservable();
    }
}
Now, assuming that your current service is called Service, you would write this code to set things up:
IService service = new Service();
ServiceWrapper wrapped = new ServiceWrapper(service); // Still an `IService`

var subscription =
    Observable
        .Merge(
            wrapped.GetInts().Select(x => x.ToString()),
            wrapped.GetStrings(),
            wrapped.GetString2s())
        .Subscribe(x => label.Text = x);

IService wrappedService = wrapped;
Now pass wrappedService instead of service to your code. It's still calling the underlying service code, so there's no need for a rewrite, yet you still get the observables that you want.
This is effectively the Gang of Four decorator pattern.

Entity Framework + ODATA: side-stepping the pagination

The project I'm working on has the Entity Framework on top of an OData layer. The OData layer has its server-side pagination set to a value of 75. My reading on the subject leads me to believe that this pagination value is used across the board, rather than on a per-table basis. The table I'm currently looking to extract all the data from has, of course, more than 75 rows. Using the Entity Framework, my code is simply this:
public IQueryable<ProductColor> GetProductColors()
{
    return db.ProductColors;
}
where db is the entity context. This returns the first 75 records. I read somewhere that I could append a query option inlinecount set to allpages, giving me the following code:
public IQueryable<ProductColor> GetProductColors()
{
    return db.ProductColors.AddQueryOption("inlinecount", "allpages");
}
However, this too returns 75 rows!
Can anyone shed light on how to truly get all the records regardless of the OData server-side pagination stuff?
important: I cannot remove the pagination or turn it off! It's extremely valuable in other scenarios where performance is a concern.
Update:
Through some more searching I've found an MSDN article that describes how to do this task.
I'd love to be able to turn it into a fully generic method, but this was as close as I could get to generic without using reflection:
public IQueryable<T> TakeAll<T>(QueryOperationResponse<T> qor)
{
    var collection = new List<T>();
    DataServiceQueryContinuation<T> next = null;
    QueryOperationResponse<T> response = qor;
    do
    {
        if (next != null)
        {
            response = db.Execute<T>(next) as QueryOperationResponse<T>;
        }
        foreach (var elem in response)
        {
            collection.Add(elem);
        }
    } while ((next = response.GetContinuation()) != null);
    return collection.AsQueryable();
}
calling it like:
public IQueryable<ProductColor> GetProductColors()
{
    QueryOperationResponse<ProductColor> response = db.ProductColors.Execute() as QueryOperationResponse<ProductColor>;
    var productColors = this.TakeAll<ProductColor>(response);
    return productColors.AsQueryable();
}
If you are unable to turn off paging, you will always receive 75 rows per call. You can get all the rows in the following ways:
Add another entity set, IQueryable<ProductColor> AllProductColors, and modify InitializeService:
public static void InitializeService(DataServiceConfiguration config)
{
    config.UseVerboseErrors = true;
    config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
    config.SetEntitySetPageSize("ProductColors", 75); // note: only the entity sets that should be paged are listed here
    config.SetServiceOperationAccessRule("*", ServiceOperationRights.AllRead);
    config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
}
Or call ProductColors as many times as needed, following the continuations; for example:
var cat = new NetflixCatalog(new Uri("http://odata.netflix.com/v1/Catalog/"));

var x = from t in cat.Titles
        where t.ReleaseYear == 2009
        select t;

var response = (QueryOperationResponse<Title>)((DataServiceQuery<Title>)x).Execute();
while (true)
{
    foreach (Title title in response)
    {
        Console.WriteLine(title.Name);
    }
    var continuation = response.GetContinuation();
    if (continuation == null)
    {
        break;
    }
    response = cat.Execute(continuation);
}
I use Rx with the following code:
public sealed class DataSequence<TEntry> : IObservable<TEntry>
{
    private readonly DataServiceContext context;
    private readonly Logger logger = LogManager.GetCurrentClassLogger();
    private readonly IQueryable<TEntry> query;

    public DataSequence(IQueryable<TEntry> query, DataServiceContext context)
    {
        this.query = query;
        this.context = context;
    }

    public IDisposable Subscribe(IObserver<TEntry> observer)
    {
        QueryOperationResponse<TEntry> response;
        try
        {
            response = (QueryOperationResponse<TEntry>)((DataServiceQuery<TEntry>)query).Execute();
            if (response == null)
            {
                return Disposable.Empty;
            }
        }
        catch (Exception ex)
        {
            logger.Error(ex);
            return Disposable.Empty;
        }
        var initialState = new State
        {
            CanContinue = true,
            Response = response
        };
        IObservable<TEntry> sequence = Observable.Generate(
            initialState,
            state => state.CanContinue,
            MoveToNextState,
            GetCurrentValue,
            Scheduler.ThreadPool).Merge();
        return new CompositeDisposable(initialState, sequence.Subscribe(observer));
    }

    private static IObservable<TEntry> GetCurrentValue(State state)
    {
        if (state.Response == null)
        {
            return Observable.Empty<TEntry>();
        }
        return state.Response.ToObservable();
    }

    private State MoveToNextState(State state)
    {
        DataServiceQueryContinuation<TEntry> continuation = state.Response.GetContinuation();
        if (continuation == null)
        {
            state.CanContinue = false;
            return state;
        }

        QueryOperationResponse<TEntry> response;
        try
        {
            response = context.Execute(continuation);
        }
        catch (Exception)
        {
            state.CanContinue = false;
            return state;
        }

        state.Response = response;
        return state;
    }

    private sealed class State : IDisposable
    {
        public bool CanContinue { get; set; }
        public QueryOperationResponse<TEntry> Response { get; set; }

        public void Dispose()
        {
            CanContinue = false;
        }
    }
}
So, to get any data through OData, create a sequence and Rx does the rest:
var sequence = new DataSequence<Product>(context.Products, context);

sequence.OnErrorResumeNext(Observable.Empty<Product>())
    .ObserveOnDispatcher()
    .SubscribeOn(Scheduler.NewThread)
    .Subscribe(AddProduct, logger.Error);
The page size is set by the service author and can be set per entity set (but a service may choose to apply the same page size to all entity sets). There's no way to avoid it from the client (which is by design since it's a security feature).
The inlinecount option asks the server to include the total count of the results (just the number); it doesn't disable the paging.
From the client, the only way to read all the data is to issue the request, which returns the first page and may contain a next link; you then request the next page, and so on, until the last response has no next link.
If you're using the WCF Data Services client library, it has support for continuations (the next link); a simple sample can be found in this blog post: http://blogs.msdn.com/b/phaniraj/archive/2010/04/25/server-driven-paging-with-wcf-data-services.aspx

Refactoring two basic classes

How would you refactor these two classes to abstract out the similarities? An abstract class? Simple inheritance? What would the refactored class(es) look like?
public class LanguageCode
{
    /// <summary>
    /// Get the lowercase two-character ISO 639-1 language code.
    /// </summary>
    public readonly string Value;

    public LanguageCode(string language)
    {
        this.Value = new CultureInfo(language).TwoLetterISOLanguageName;
    }

    public static LanguageCode TryParse(string language)
    {
        if (language == null)
        {
            return null;
        }
        if (language.Length > 2)
        {
            language = language.Substring(0, 2);
        }
        try
        {
            return new LanguageCode(language);
        }
        catch (ArgumentException)
        {
            return null;
        }
    }
}

public class RegionCode
{
    /// <summary>
    /// Get the uppercase two-character ISO 3166 region/country code.
    /// </summary>
    public readonly string Value;

    public RegionCode(string region)
    {
        this.Value = new RegionInfo(region).TwoLetterISORegionName;
    }

    public static RegionCode TryParse(string region)
    {
        if (region == null)
        {
            return null;
        }
        if (region.Length > 2)
        {
            region = region.Substring(0, 2);
        }
        try
        {
            return new RegionCode(region);
        }
        catch (ArgumentException)
        {
            return null;
        }
    }
}
It depends. If they are not going to do much more, then I would probably leave them as is; IMHO, factoring the common bits out is likely to end up more complex in this case.
Unless you have a strong reason for refactoring (because you are going to add more classes like these in the near future), the penalty of changing the design for such a small and contrived example would outweigh any gain in maintainability in this scenario. Anyhow, here is a possible design based on generics and lambda expressions.
public class TwoLetterCode<T>
{
    private readonly string value;

    public TwoLetterCode(string value, Func<string, string> predicate)
    {
        this.value = predicate(value);
    }

    public static T TryParse(string value, Func<string, T> predicate)
    {
        if (value == null)
        {
            return default(T);
        }
        if (value.Length > 2)
        {
            value = value.Substring(0, 2);
        }
        try
        {
            return predicate(value);
        }
        catch (ArgumentException)
        {
            return default(T);
        }
    }

    public string Value { get { return this.value; } }
}
public class LanguageCode : TwoLetterCode<LanguageCode>
{
    public LanguageCode(string language)
        : base(language, v => new CultureInfo(v).TwoLetterISOLanguageName)
    {
    }

    public static LanguageCode TryParse(string language)
    {
        return TwoLetterCode<LanguageCode>.TryParse(language, v => new LanguageCode(v));
    }
}

public class RegionCode : TwoLetterCode<RegionCode>
{
    public RegionCode(string region)
        : base(region, v => new RegionInfo(v).TwoLetterISORegionName)
    {
    }

    public static RegionCode TryParse(string region)
    {
        return TwoLetterCode<RegionCode>.TryParse(region, v => new RegionCode(v));
    }
}
This is a rather simple question, and to me it smells awfully like a homework assignment.
You can obviously see the common bits in the code and I'm pretty sure you can make an attempt at it yourself by putting such things into a super-class.
You could maybe combine them into a Locale class, which stores both the language code and the region code, has accessors for Region and Language, plus one parse function that also allows for strings like "en_gb"...
That's how I've seen locales handled in various frameworks.
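For illustration, a rough sketch of that Locale idea might look like the following; the class shape and the parsing rules here are assumptions made for the sake of the example, not production-ready code:
using System;
using System.Globalization;

public class Locale
{
    public string Language { get; private set; }  // lowercase ISO 639-1
    public string Region { get; private set; }    // uppercase ISO 3166, or null

    private Locale(string language, string region)
    {
        Language = language;
        Region = region;
    }

    // Accepts "en", "en_GB", "en-GB", ...; returns null if nothing parses.
    public static Locale TryParse(string text)
    {
        if (string.IsNullOrEmpty(text))
        {
            return null;
        }

        string[] parts = text.Split('_', '-');
        try
        {
            string language = new CultureInfo(parts[0]).TwoLetterISOLanguageName;
            string region = parts.Length > 1
                ? new RegionInfo(parts[1]).TwoLetterISORegionName
                : null;
            return new Locale(language, region);
        }
        catch (ArgumentException)
        {
            return null;
        }
    }
}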
These two, as they stand, aren't going to refactor well because of the static methods.
You'd either end up with some kind of factory method on a base class that returns an instance of that base class (which would subsequently need casting), or you'd need some kind of additional helper class.
Given the amount of extra code and subsequent casting to the appropriate type, it's not worth it.
Create a generic base class (eg AbstractCode<T>)
add abstract methods like
protected T GetConstructor(string code);
override them in the derived classes, like
protected override RegionCode GetConstructor(string code)
{
    return new RegionCode(code);
}
Finally, do the same with string GetIsoName(string code), eg
protected override string GetIsoName(string code)
{
    return new RegionInfo(code).TwoLetterISORegionName;
}
That will refactor both of them. Chris Kimpton does raise the important question as to whether the effort is worth it.
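For what it's worth, a minimal sketch of how those fragments could fit together is shown below. The ParseOrNull helper name and the exact split of responsibilities are assumptions for illustration only, and the static-method limitation mentioned in the other answers still applies: each concrete class would still need its own static TryParse that delegates to the shared logic.
using System;
using System.Globalization;

public abstract class AbstractCode<T> where T : AbstractCode<T>
{
    public string Value { get; protected set; }

    // Each concrete code type supplies its own construction and ISO lookup.
    protected abstract T GetConstructor(string code);
    protected abstract string GetIsoName(string code);

    // The shared normalisation logic from the two TryParse methods lives here once.
    protected T ParseOrNull(string code)
    {
        if (code == null)
        {
            return null;
        }
        if (code.Length > 2)
        {
            code = code.Substring(0, 2);
        }
        try
        {
            return GetConstructor(code);
        }
        catch (ArgumentException)
        {
            return null;
        }
    }
}

public class RegionCode : AbstractCode<RegionCode>
{
    public RegionCode(string region)
    {
        this.Value = GetIsoName(region);
    }

    protected override RegionCode GetConstructor(string code)
    {
        return new RegionCode(code);
    }

    protected override string GetIsoName(string code)
    {
        return new RegionInfo(code).TwoLetterISORegionName;
    }
}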
I'm sure there is a better generics-based solution, but I still gave it a shot.
EDIT: As the comment says, static methods can't be overridden, so one option would be to keep the static method on the base class, pass TwoLetterCode objects around and cast them, but, as someone else has already pointed out, that is rather useless.
How about this?
public class TwoLetterCode {
    public readonly string Value;

    protected TwoLetterCode(string tlc) {
        this.Value = tlc;
    }

    public static TwoLetterCode TryParseSt(string tlc) {
        if (tlc == null)
        {
            return null;
        }
        if (tlc.Length > 2)
        {
            tlc = tlc.Substring(0, 2);
        }
        try
        {
            return new TwoLetterCode(tlc);
        }
        catch (ArgumentException)
        {
            return null;
        }
    }
}

//Likewise for Region
public class LanguageCode : TwoLetterCode {
    public LanguageCode(string language)
        : base(new CultureInfo(language).TwoLetterISOLanguageName)
    {
    }

    public static LanguageCode TryParse(string language) {
        // Note: TryParseSt returns a plain TwoLetterCode, so this cast fails
        // at runtime -- which is the weakness acknowledged in the EDIT above.
        return (LanguageCode)TwoLetterCode.TryParseSt(language);
    }
}