Is there a way to create macros in C#?
For example:

string checkString = "'bob' == 'bobthebuilder'"; // this will be dynamic

if (##checkString)
    // .........
else
    // .........

Thanks.
No, C# doesn't have macros. You could capture your logic in a delegate and apply that delegate in multiple places, potentially... would that help?
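For instance, a minimal sketch of the delegate approach (the names and the check itself are made up to mirror the question):

using System;

class Program
{
    static void Main()
    {
        // The "macro" becomes an ordinary, reusable piece of logic.
        Func<string, string, bool> check = (a, b) => a == b;

        if (check("bob", "bobthebuilder"))
            Console.WriteLine("match");
        else
            Console.WriteLine("no match");
    }
}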
If you could describe the problem you're trying to solve rather than the solution you think you'd like, we may be able to help more.
T4 seems to be gaining traction these days for .NET work. It's not quite what you asked for, but it may be extremely beneficial in some cases (or it may just be a hint down the wrong path).
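To give a flavour, here's a minimal, hypothetical .tt template (the class and method names are made up). T4 runs the control blocks at design time and emits plain C#:

<#@ template language="C#" #>
<#@ output extension=".cs" #>
// Generated code; edit the template, not this file.
public static class Greetings
{
<# foreach (var name in new[] { "Bob", "Alice" }) { #>
    public static string To<#= name #>() { return "Hello, <#= name #>!"; }
<# } #>
}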
In most cases, especially with generics, I do not wish for 'templates' or 'macros' in C# (or Scala). In the example above, you could simply use:
bool sameStuff = ("bob" == "bobthebuilder");
...
if (sameStuff) {
...
}
More complex cases can generally be dealt with refactoring methods or using anonymous functions.
Additionally, attributes (while a completely different approach) round out the case for many "traditional" uses of templates.
As mentioned, no, but there are a number of other approaches:
Conditional compilation via #if (see the sketch below)
Templating via T4 or something else (we use a port of Ned Batchelder's Cog, mentioned elsewhere)
Aspect-Oriented Programming via something like PostSharp
As Jon said, lots of ways; it'd be better to describe exactly what you want to do.
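Of those, conditional compilation is the closest built-in analogue to C-style macros. A minimal sketch (the VERBOSE symbol is made up; it could equally come from project settings or a /define compiler flag):

#define VERBOSE

using System;

class Program
{
    static void Main()
    {
#if VERBOSE
        Console.WriteLine("verbose build");
#else
        Console.WriteLine("quiet build");
#endif
    }
}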
Short answer: No.
Long answer: You can write a wrapper around the C/C++ compiler's preprocessor.
Most of the syntax will be accepted with the notable exception of #region/#endregion. You can just prefix those with #pragma before processing, and remove the #pragma part afterwards.
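A rough sketch of such a wrapper (everything here is an assumption: it shells out to a cpp binary presumed to be on the PATH, and the file names are made up):

using System;
using System.Diagnostics;
using System.IO;
using System.Text.RegularExpressions;

class CppWrapper
{
    static void Main(string[] args)
    {
        string source = File.ReadAllText(args[0]);

        // Hide #region/#endregion from cpp by turning them into #pragma lines.
        source = Regex.Replace(source, @"^(\s*)#(region|endregion)",
                               "$1#pragma $2", RegexOptions.Multiline);
        File.WriteAllText("input.tmp", source);

        // Run the C preprocessor; -P suppresses #line markers in the output.
        Process.Start(new ProcessStartInfo("cpp", "-P input.tmp output.tmp")
        {
            UseShellExecute = false
        }).WaitForExit();

        // Restore the original directives.
        string output = File.ReadAllText("output.tmp");
        output = Regex.Replace(output, @"^(\s*)#pragma (region|endregion)",
                               "$1#$2", RegexOptions.Multiline);
        File.WriteAllText(args[1], output);
    }
}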
Related
I realize that it is impossible to have one language that is best for everything.
But there is a class of simple programs, whose source code looks virtually identical in any language.
I am thinking not just "hello world", but also arithmetic, maybe string manipulation, basic stuff that you would typically see in utility classes.
I would like to keep my utilities in this meta-language and have it automatically translated to a bunch of popular languages. I do this by hand right now.
Again, I do not ask for translation of every single possible program. I am thinking a very limited, simple language, but superportable.
Do you know of anything like that? Is there a reason why it should not exist?
Check Haxe, and its Wikipedia page. It's open source and its main purpose is what you describe: generating code in many languages from only one source.
Just about any language that you choose is going to have some feature that doesn't map to another in a natural way. The closest thing I can think of is probably a useful subset of JavaScript. Of course, if you are the language author you can limit it as much as you want, providing only constructs that are common to just about any language (loops, conditionals, etc.)
For purposes of mutability, an XML representation would be best, but you wouldn't want to code in it.
If you find that there is no universal language, you can try a pragmatic model-driven development approach, using a template-based code generator.
On the template you keep the underlying concepts of an algorithm. Then, when necessary, you add code for this algorithm in one or more specific languages (C++, Java, JS, Python). You would have to do that anyway, whatever the language or approach you choose. A configuration switch picks the correct language for any template you apply.
AtomWeaver is a code generator that works with templates and employs ABSE as the modeling approach.
I did some looking and found this, which looks interesting:
https://www.indiegogo.com/projects/universal-programming-language
A classic Pascal is very simple. Oberon is another similar option. Or you could invent your own derivative language similar to the pseudocode from computer science textbooks. It's trivial to implement a translator from one of those languages into any decent modern imperative language.
I am battling to understand why a post-compiler, like PostSharp, should ever be needed.
My understanding is that it just inserts code where attributed in the original code, so why doesn't the developer just write that code themselves?
I expect that someone will say it's easier to write since you can use attributes on methods and not clutter them up with boilerplate code, but that can be done using DI or reflection and a touch of forethought, without a post-compiler. I know that since I have said reflection, the performance elephant will now enter - but I do not care about the relative performance here, when the absolute performance for most scenarios is trivial (sub-millisecond to millisecond).
Let's try to take an architectural point of view on the issue. Say you are an architect (everyone wants to be an architect ;)
You need to deliver the architecture to your team:
a selected set of libraries, architectural patterns, and design patterns. As a part of your design, you say: "we will implement caching using the following design pattern:"
string key = string.Format("[{0}].MyMethod({1},{2})", this, param1, param2);
T value;
if (!cache.TryGetValue(key, out value))
{
    using (cache.Lock(key))
    {
        if (!cache.TryGetValue(key, out value))
        {
            // Do the real job here and store the value into variable 'value'.
            cache.Add(key, value);
        }
    }
}
This is a correct way to do caching. Developers are going to implement this pattern thousands of times, so you write a nice Word document telling them how you want the pattern to be implemented. Yeah, a Word document. Do you have a better solution? I'm afraid you don't. Classic code generators won't help. Functional programming (delegates)? It works fairly well for some aspects, but not here: you need to pass method parameters to the pattern. So what's left? Describe the pattern in natural language and trust that developers will implement it properly.
What will happen?
First, some junior developer will look at the code and say, "Hm. Two cache lookups. Kinda useless. One is enough." (That's not a joke -- ask the DNN team about this issue.) And your pattern ceases to be thread-safe.
As an architect, how do you ensure that the pattern is properly applied? Unit testing? Fair enough, but you will hardly detect threading issues this way. Code review? Maybe that's the solution.
Now, what if you decide to change the pattern? For instance, you detect a bug in the cache component and decide to use your own. Are you going to edit thousands of methods? It's not just refactoring: what if the new component has different semantics?
What if you decide that a method is not going to be cached any more? How difficult will it be to remove caching code?
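For contrast, here is roughly what the same pattern looks like once it is captured in a single aspect. This is only a sketch against PostSharp's MethodInterceptionAspect; the exact API details are from memory and version-dependent, so treat them as assumptions, and any AOP framework could play the same role.

using System;
using System.Collections.Concurrent;
using PostSharp.Aspects;

[Serializable]
public sealed class CacheAttribute : MethodInterceptionAspect
{
    // ConcurrentDictionary gives the thread safety that the hand-written
    // double-checked locking above was trying to achieve.
    private static readonly ConcurrentDictionary<string, object> cache =
        new ConcurrentDictionary<string, object>();

    public override void OnInvoke(MethodInterceptionArgs args)
    {
        // Build the key from the method and its actual arguments.
        string key = string.Format("{0}({1})", args.Method,
            string.Join(",", args.Arguments.ToArray()));

        object value;
        if (cache.TryGetValue(key, out value))
        {
            args.ReturnValue = value;   // cache hit: skip the method body
            return;
        }

        args.Proceed();                 // cache miss: do the real job
        cache.TryAdd(key, args.ReturnValue);
    }
}

// Applying the pattern is now one attribute per method:
// [Cache]
// public Customer GetCustomer(int id) { ... }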
The AOP solution (whatever the framework is) has the following advantages over plain code:
It reduces the number of lines of code.
It reduces the coupling between components: when you decide to change the caching component, you just update the aspect instead of touching many things, so it improves the capacity of your source code to cope with new requirements over time.
Because there is less code, the probability of bugs is lower for a given set of features, therefore AOP improves the quality of your code.
So if you put it all together:
Aspects reduce both development costs and maintenance costs of software.
I have a 90 min talk on this topic and you can watch it at http://vimeo.com/2116491.
Again, the architectural advantages of AOP are independent of the framework you choose. The differences between frameworks (also discussed in this video) influence principally the extent to which you can apply AOP to your code, which was not the point of this question.
Suppose you already have a class which is well-designed, well-tested etc. You want to easily add some timing on some of the methods. Yes, you could use dependency injection, create a decorator class which proxies to the original but with timing for each method - but even that class is going to be a mess of repetition...
... or you can add reflection to the mix and use a dynamic proxy of some description, which lets you write the timing code once, but requires you to get that reflection code just right - which isn't as easy as it might be, especially if generics are involved.
... or you can add an attribute to each method that you want timed, write the timing code once, and apply it as a post-compile step.
I know which seems more elegant to me - and more obvious when reading the code. It can be applied even in situations where DI isn't appropriate (and it really isn't appropriate for every single class in a system) and with no other changes elsewhere.
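For concreteness, here's a sketch of the attribute-based option using PostSharp's OnMethodBoundaryAspect. The API details are from memory and version-dependent, so treat them as assumptions rather than a definitive implementation:

using System;
using System.Diagnostics;
using PostSharp.Aspects;

[Serializable]
public sealed class TimedAttribute : OnMethodBoundaryAspect
{
    public override void OnEntry(MethodExecutionArgs args)
    {
        // Stash a stopwatch for this particular invocation.
        args.MethodExecutionTag = Stopwatch.StartNew();
    }

    public override void OnExit(MethodExecutionArgs args)
    {
        var sw = (Stopwatch)args.MethodExecutionTag;
        sw.Stop();
        Console.WriteLine("{0} took {1} ms",
            args.Method.Name, sw.ElapsedMilliseconds);
    }
}

// Usage: annotate only the methods you care about, with no other changes.
// [Timed]
// public void ProcessOrders() { ... }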
AOP (PostSharp) is for attaching code to all sorts of points in your application, from one location, so you don't have to place it there.
You cannot achieve with reflection what PostSharp can do.
I personally don't see a big use for it, in a production system, as most things can be done in other, better, ways (logging, etc).
You may like to review the other threads on this matter:
Anyone with Postsharp experience in production?
Other than logging, and transaction management what are some practical applications of AOP?
Aspect Oriented Programming: What do you use PostSharp for?
etc (search)
Aspects take away all the copy-and-paste code and make adding new features faster.
I hate nothing more than, for example, having to write the same piece of code over and over again. Gael has a very nice example regarding INotifyPropertyChanged on his website (www.postsharp.net).
This is exactly what AOP is for. Forget about the technical details, just implement what you are being asked for.
In the long run, I think we all should say goodbye to the way we are writing software now. It's tedious and plainly stupid to write boilerplate code and iterate manually.
The future belongs to declarative, functional style being held together by an object oriented framework - and the cross cutting concerns being handled by aspects.
I guess the only people who will not get it soon are the guys who are still paid by lines of code.
Currently I am making some decisions for my first Objective-C API. Nothing big, just a little help for myself to get things done faster in the future.
After reading for a few hours about different patterns like categories, singletons, and so on, I came across something that I like because it seems easy to maintain: I'm making a set of utility functions that can be useful everywhere.
So what I did is:
1) I created two new files (.h, .m), and gave the "class" a name: SLUtilsMath, SLUtilsGraphics, SLUtilsSound, and so on. I think of that as a kind of "namespace", so all those things will always be called SLUtils******. I added all of them into a group SL, which contains a subgroup SLUtils.
2) Then I just put my function signatures in the .h file and the implementations in the .m file. And guess what: it works!! I'm happy with it, and it's easy to use. The only nasty thing about it is that I have to include the appropriate header every time I need it. But that's okay, since that's normal. I could include it in the prefix header (.pch) file, though.
But then, I went to the toilet and a ghost came out there, saying: "Hey! Isn't it better to make real methods, instead of functions? Shouldn't you make class methods, so that you have to call a method rather than a function? Isn't that much cooler, and doesn't it have better performance?" Well, for readability I prefer the functions. On the other hand, they don't have the kind of "named parameters" that methods have, AFAIK.
So what would you prefer in that case?
Of course I don't want to have to allocate an object just to use a utility method or function. That would be a hassle.
Maybe the toilet ghost was right. There IS a cooler way. Well, for me, personally, this is great:
MYNAMESPACECoolMath.h
#import <Foundation/Foundation.h>
@interface MYNAMESPACECoolMath : NSObject {
}
+ (float)randomizeValue:(float)value byPercent:(float)percent;
+ (float)calculateHorizontalGravity:(CGPoint)p1 andPoint:(CGPoint)p2;
// and some more
@end
Then in code, I would simply import MYNAMESPACECoolMath.h and call:
CGFloat myValue = [MYNAMESPACECoolMath randomizeValue:10.0f byPercent:5.0f];
with no nasty instantiation, initialization, or allocation whatsoever. For me that pattern looks like a static method in Java, which is pretty nice and easy to use.
The advantage over a function is, as far as I've noticed, better readability in code. When looking at CGRectMake(10.0f, 42.5f, 44.2f, 99.11f) you may have to look up what those parameters stand for, if you're not so familiar with it. But when you have a method call with "named" parameters, you see immediately what each parameter is.
I think I'm missing what a singleton class would gain me when it comes to simple utility methods/functions that can be needed everywhere. Making a special kind of random value doesn't belong to anything; it's global. Like grass. Like trees. Like air. Everyone needs it.
Performance-wise, a static method in a static class compiles to almost the same thing as a function.
Any real performance hits you'd incur would be in object instantiation, which you said you'd want to avoid, so that should not be an issue.
As far as preference or readability, there is a trend to use static methods more than necessary because people are viewing Obj-C as an "OO-only" language, like Java or C#. In that paradigm, (almost) everything must belong to a class, so class methods are the norm. In fact, they may even call them functions. The two terms are interchangeable there. However, this is purely convention. Convention may even be too strong a word. There is absolutely nothing wrong with using functions in their place, and it is probably more appropriate if there are no class members (even static ones) needed to assist in the processing of those methods/functions.
The problem with your approach is the "util" nature of it. Almost anything with the word "util" in it suggests that you have created a dumping ground for things you don't know where to fit into your object model. That probably means that your object model is not in alignment with your problem space.
Rather than working out how to package up utility functions, you should be thinking about what model objects these functions should be acting upon and then put them on those classes (creating the classes if needed).
To Josh's point, while there is nothing wrong with functions in ObjC, it is a very strongly object-oriented language, based directly on the grand-daddy of object-oriented languages, Smalltalk. You should not abandon the OOP patterns lightly; they are the heart of Cocoa.
I create private helper functions all the time, and I create public convenience functions for some objects (NSLocalizedString() is a good example of this). But if you're creating public utility functions that aren't front-ends to methods, you should be rethinking your patterns. And the first warning sign is the desire to put the word "util" in a file name.
EDIT
Based on the particular methods you added to your question, what you should be looking at are Categories. For instance, +randomizeValue:byPercent: is a perfectly good NSNumber category:
// NSNumber+SLExtensions.h
@interface NSNumber (SLExtensions)
- (double)randomizeByPercent:(CGFloat)percent;
+ (double)randomDoubleNear:(double)number byPercent:(CGFloat)percent;
+ (NSNumber *)randomNumberNear:(double)number byPercent:(CGFloat)percent;
@end
// Some other file that wants to use this
#import "NSNumber+SLExtensions.h"
randomDouble = [aNumber randomizeByPercent:5.0];
randomDouble = [NSNumber randomDoubleNear:5.0 byPercent:7.0];
If you get a lot of these, then you may want to split them up into categories like NSNumber+Random. Doing it with Categories makes it transparently part of the existing object model, though, rather than creating classes whose only purpose is to work on other objects.
You can use a singleton instance instead if you want to avoid instantiating a bunch of utility objects.
There's nothing wrong with using plain C functions, though. Just know that you won't be able to pass them around using @selector for things like performSelectorOnMainThread:.
When it comes to performance of methods vs. functions, Mike Ash has some great numbers in his post "Performance Comparisons of Common Operations". Objective-C message send operations are extremely fast, so much so that you'd have to have a really tight computational loop to even see the difference. I think that using functions vs. methods in your approach will come down to the stylistic design issues that others have described.
Optimise the system, not the function calls.
Implement what is easiest to understand and then, when the whole system works, profile it and speed up what's slow. I doubt very much that the Objective-C runtime overhead of a class method is going to matter one bit to your whole app.
Possible Duplicate:
C# 'var' keyword versus explicitly defined variables
EDIT:
For those who are still viewing this, I've completely changed my opinion on var, largely due to the responses to this topic. I'm an avid 'var' user now, and I think its proponents' comments below were absolutely correct in pretty much all cases. I think the thing I like most about var is that it REALLY DOES reduce repetition (conforms to DRY) and makes your code considerably cleaner. It supports refactoring (when you need to change the return type of something, you have less code cleanup to deal with, and NO, NOT everyone has a fancy refactoring tool!), and anecdotally, people don't really seem to have a problem not knowing the specific type of a variable up front (it's easy enough to "discover" the capabilities of a type on demand, which is generally a necessity anyway, even if you DO know the name of the type).
So here's a big round of applause for the 'var' keyword!!
This is a relatively simple question...more of a poll really. I am a HUGE fan of C#, and have used it for over 8 years, since before .NET was first released. I am a fan of all of the improvements made to the language, including lambda expressions, extension methods, LINQ, and anonymous types. However, there is one feature from C# 3.0 that I feel has been SORELY misused....the 'var' keyword.
Since the release of C# 3.0, on blogs, forums, and yes, even Stack Overflow, I have seen var replace pretty much every variable being written! To me, this is a grave misuse of the feature, and it leads to very arbitrary code that can harbor many obfuscated bugs due to the lack of clarity about what type a variable actually is.
There is only a single truly valid use for 'var' (in my opinion at least). What is that valid use, you ask? The only valid use is when you are incapable of knowing the type, and the only instance where that can happen:
When accessing an anonymous type
Anonymous types have no compile-time identity, so var is the only option. It's the only reason why var was added...to support anonymous types.
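A minimal illustration:

// The anonymous type exists at compile time but has no name you can write,
// so var is the only way to declare the variable.
var person = new { Name = "bob", Age = 42 };
Console.WriteLine(person.Name); // still statically typed; Name is a string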
So... what's your opinion? Given the prolific use of var on blogs and forums, and its being suggested/enforced by tools like ReSharper, many up-and-coming developers will see it as a completely valid thing.
Do you think var should be used so prolifically?
Do you think var should ever be used for anything other than an anonymous type?
Is it acceptable to use in code posted to blogs to maintain brevity and terseness? (Not sure about the answer to this one myself... perhaps with a disclaimer.)
Should we, as a community, encourage better use of strongly typed variables to improve code clarity, or allow C# to become more vague and less descriptive?
I would like to know the community's opinion. I see var used a lot, but I have very little idea why, and perhaps there is a good reason (i.e. brevity/terseness).
var is a splendid idea to help implement a key principle of good programming: DRY, i.e., Don't Repeat Yourself.
VeryComplicatedType x = new VeryComplicatedType();
is bad coding, because it repeats VeryComplicatedType, and the effects are all negative: more verbose and boilerplatey code, less readability, silly "makework" for both the reader and the writer of the code. Because of all this, I count var as a very useful enhancement in C# 3 compared to Java and previous versions of C#.
Of course it can be mildly misused, by using as the RHS an expression whose type is not clear and obvious (e.g., a call to a method whose declaration may be far away) -- such misuse may decrease readability (by forcing the reader to hunt for the method's declaration or ponder deeply about some other subtle expression's type) instead of increasing it. But if you stick to using var to avoid repetition, you'll be in its sweet spot, and no misuse.
I think it should be used in those situations where the type is clearly specified elsewhere in the same statement:
Dictionary<string, List<int>> myHashMap = new Dictionary<string, List<int>>();
is a pain to read. This could be replaced by the following with no loss of clarity:
var myHashMap = new Dictionary<string, List<int>>();
Pop quiz!
What type is this:
var Foo = new string[]{"abc","123","yoda"};
How about this:
var Bar = new[] {"abc", "123", "yoda"};
It takes me roughly no longer to determine what types those are than it does with the explicitly redundant specification of the type. As a programmer I have no issues with letting a compiler figure out things that are obvious to me. You may disagree.
Cheers.
Never say never. I'm pretty sure there are a bunch of questions where people have expounded their views on var, but here's mine once more.
var is a tool; use it where it's appropriate, and don't use it when it's not. You're right that the only required use of var is when addressing anonymous types, in which case you have no type name to use. Personally, I'd say any other use has to be considered in terms of readability and laziness; specifically, when avoiding use of a cumbersome type name.
var i = 5;
(Laziness)
var list = new List<Customer>();
(Convenience)
var customers = GetCustomers();
(Questionable; I'd consider it acceptable if and only if GetCustomers() returns an IEnumerable)
Read up on Haskell. It's a statically typed language in which you rarely have to state the type of anything. So it uses the same approach as var, as the standard "idiomatic" coding style.
If the compiler can figure something out for you, why write the same thing twice?
A colleague of mine was at first very opposed to var, just as you are, but has now started using it habitually. He was worried it would make programs less self-documenting, but in practice that's caused more by overly long methods.
var MyCustomers = from c in Customers
                  where c.City == "Madrid"
                  select new { c.Company, c.Mail };

If I need only Company and Mail from the Customers collection, it makes no sense to define a whole new named type with just the members I need.
If you feel that giving the same information twice reduces errors (the designers of many web forms that insist you type in your email address twice seem to agree), then you'll probably hate var. If you write a lot of code that uses complicated type specifications then it's a godsend.
EDIT: To expand on this a bit (in case it sounds like I'm not in favour of var):
In the UK (at least at the time I went), it was standard practice to make Computer Science students learn how to program in Standard ML. Like other functional languages it has a type system that puts languages in the C++/Java mould to shame.
Anyway, what I noticed at the time (and heard similar remarks from other students) was that it was a nightmare to get your SML programs to compile because the compiler was so incredibly picky about types, but once they did compile, they almost always ran without error.
This aspect of SML (and other functional languages) seems to be one that the questioner sees as a 'good thing' - i.e. anything that helps the compiler catch more errors at compile time is good.
Now here's the thing with SML: it uses type inference exclusively when assigning. So I don't think type inference can be inherently bad.
I agree with others that var eliminates redundancy. I have decided to use var where it eliminates redundancy as much as possible. I think consistency is important. Choose a style and stick with it through a project.
As Earwicker indicated, there are some functional languages, Haskell being one and F# being another, where such type inference is used much more pervasively -- the C# analogy would be declaring the return types and parameter types of methods as "var", and then having the compiler infer the static type for you. Static and explicit typing are two orthogonal concerns.
In fact, is it even correct to say that use of "var" is dynamic typing? From what I understood, that's what the new "dynamic" keyword in C# 4.0 is for. "var" is for static type inference. Correct me if I am wrong.
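For what it's worth, a minimal side-by-side of the two (dynamic requires C# 4.0):

var s = "hello";     // static type inference: s is a string at compile time
// s = 5;            // would be a compile-time error

dynamic d = "hello"; // dynamic typing: member binding is deferred to run time
d = 5;               // fine; mistakes only surface at run time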
I must admit that when I first saw the var keyword pop up, I was very skeptical.
However, it is definitely an easy way to shorten the lines of a new declaration, and I use it all the time for that.
However, when I change the type of an underlying method and accept the return type using var, I do get the occasional run-time error. Most are still picked up by the compiler.
The second issue I run into is when I am not sure what method to use (and I am simply looking through the auto-complete). If I choose the wrong one and expect it to be of type FOO when it is of type BAR, then it takes a while to figure that out.
If I had literally specified the variable type in both cases, it would have saved a bit of frustration.
Overall, the benefits exceed the problems.
I have to dissent from the view that var reduces redundancy in any meaningful way. In the cases that have been put forward here, type inference can and should come out of the IDE, where it can be applied much more liberally with no loss of readability.
I'm investigating using DbC in our Perl projects, and I'm trying to find the best way to verify contracts in the source (e.g. checking pre/post conditions, invariants, etc.)
Class::Contract was written by Damian Conway and is now maintained by C. Garret Goebel, but it looks like it hasn't been touched in over 8 years.
It looks like what I want to use is Moose, as it seems as though it might offer functionality that could be used for DbC, but I was wondering if anyone had any resources (articles, etc.) on how to go about this, or if there are any helpful modules out there that I haven't been able to find.
Is anyone doing DbC with Perl? Should I just "jump in" to Moose and see what I can get it to do for me?
Moose gives you a lot of the tools (if not all the sugar) to do DbC. Specifically, you can use the before, after, and around method hooks (here are some examples) to perform whatever assertions you might want to make on arguments and return values.
As an alternative to "roll your own DbC" you could use a module like MooseX::Method::Signatures or MooseX::Method to take care of validating parameters passed to a subroutine. These modules don't handle the "post" or "invariant" validations that DbC typically provides, however.
EDIT: Motivated by this question, I've hacked together MooseX::Contract and uploaded it to the CPAN. I'd be curious to get feedback on the API as I've never really used DbC first-hand.
Moose is an excellent OO system for Perl, and I heartily recommend it to anyone coding objects in Perl. You can specify "subtypes" for your class members that will be enforced when set by accessors or constructors (the same system can be used with the Moose::Methods package for functions). If you are coding more than one-liners: use Moose;
As for doing DbC, well, it might not be the best fit for Perl 5. It's going to be hard in a language that offers you very few guarantees. Personally, in a lot of dynamic languages, but especially Perl, I tend to make DRY and test-driven development my guiding philosophy.
I would also recommend using Moose.
However, as an alternative, take a look at Sub::Contract.
To quote the author....
Sub::Contract offers a pragmatic way to implement parts of the programming by contract paradigm in Perl.
Sub::Contract is not a design-by-contract framework.
Sub::Contract aims at making it very easy to constrain subroutines' input arguments and return values in order to emulate strong typing at runtime.
If you don't need class invariants, I've found the following Perl Hacks book recommendation to be a good solution for some programs. See Smart::Comments.