Upgradable smart contracts with Solidity: interface vs library?

In the context of upgradable smart contracts, when should one use interfaces and when libraries?
I read several similar questions and blog posts, but none of them give a straight-to-the-point answer:
(Sub) contract vs. library vs. struct vs. Interface
How to improve smart contact design in order to distinguish data and their manipulation functions for the same domain object?
Writing upgradable contracts in Solidity
Interfaces make your Solidity contracts upgradeable
Library Driven Development in Solidity
Proxy Libraries in Solidity
Exploring Code Reuse in Solidity
I understand that the main criteria to consider (besides security) when designing for upgradability are:
modularity - for reusability and easier maintenance
gas limit - split huge contracts so that they can be deployed in several transactions, so as to not hit the gas limit
cost of upgrade - how much does each contract upgrade cost. After a (small) change in one contract, which other contracts need to be re-deployed?
cost of execution - separate contracts may result in gas overhead on each call. Try to keep that overhead low.
This Medium post suggests using libraries to encapsulate logic (e.g. when interacting with "storage contracts") and using interfaces to decouple inter-contract communication. Other posts suggest different techniques. As far as I understand, libraries are linked to contracts prior to deployment, so once the contract changes, libraries need to be re-deployed. Why is it not better to use interfaces for interacting with storage contracts?
Below I present the two solutions I have seen so far - one with library and one with an interface. (I'd like to avoid solutions with inline assembly...)
Solution with library
StorageWithLib.sol:
contract StorageWithLib {
    uint public data;

    function getData() public view returns (uint) {
        return data;
    }
}
StorageLib.sol:
import './StorageWithLib.sol';
library StorageLib {
    function getData(address _storageContract) public view returns (uint) {
        return StorageWithLib(_storageContract).getData();
    }
}
ActionWithLib.sol:
import './StorageLib.sol';
contract ActionWithLib {
    using StorageLib for address;

    address public storageContract;

    function ActionWithLib(address _storageContract) public {
        storageContract = _storageContract;
    }

    function doSomething() public {
        uint data = storageContract.getData();
        // do something with data ...
    }
}
Solution with interface
IStorage.sol:
contract IStorage {
    function getData() public view returns (uint);
}
StorageWithInterface.sol:
import './IStorage.sol';
contract StorageWithInterface is IStorage {
    uint public data;

    function getData() public view returns (uint) {
        return data;
    }
}
ActionWithInterface.sol:
import './IStorage.sol';
contract ActionWithInterface {
    IStorage public storageContract;

    function ActionWithInterface(address _storageContract) public {
        storageContract = IStorage(_storageContract);
    }

    function doSomething() public {
        uint data = storageContract.getData();
        // do something with data ...
    }
}
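To connect the example to the cost-of-upgrade criterion above, here is a sketch of my own (not part of the original question) of what a logic upgrade looks like with the interface approach: you deploy a new action contract against the unchanged storage address, while the storage contract and the interface stay exactly as they are.
ActionWithInterfaceV2.sol (hypothetical upgrade):
import './IStorage.sol';
contract ActionWithInterfaceV2 {
    IStorage public storageContract;

    // pass the address of the already-deployed StorageWithInterface;
    // the data it holds survives the logic upgrade untouched
    function ActionWithInterfaceV2(address _storageContract) public {
        storageContract = IStorage(_storageContract);
    }

    function doSomething() public {
        uint data = storageContract.getData();
        // new and improved logic here ...
    }
}
The same swap works in the library solution, but if the shared logic in StorageLib itself changes, every contract linked against it has to be re-linked and re-deployed as well.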
Considering the above criteria, which solution is preferred for separating storage and logic, and why? And in which cases is the other solution the better choice?

Libraries and interfaces are really different and are used in different cases. I personally don't see them as interchangeable in contract design. Below I've tried to outline the key characteristics of the two. Note that by interface I mean an abstract contract (which is what you have in your example above). There are still issues, imo, with interfaces in Solidity, which I highlighted previously here: https://medium.com/@elena_di/hi-there-answers-below-6378b08cfcef
Libraries:
Can contain logic and are used to extract code away from the contract for maintainability and reuse purposes
Deployed once, then referenced in contracts. Their bytecode is deployed separately and is NOT part of the contracts that reference them. This is what I call a singleton in my article above ("Writing upgradable contracts in Solidity"), where I explain benefits such as lower deployment cost.
Abstract contracts / Interfaces
Cannot contain logic, just the interface definition
Mainly useful as imports to other contracts providing interaction with contract implementations. Interfaces have a much smaller deploy/import size than the implementer contract
Provide abstraction for upgradability which I've also described in my article in section on "Use ‘interfaces’ to decouple inter-contract communication"
The only similarity I can think of between the two is that neither can contain storage variables.
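To make the "deployed once" point concrete, here is a minimal sketch of my own (the names are illustrative, not from the answer): two contracts link against the same library, so the library bytecode lives on-chain exactly once and each call reaches it via delegatecall.
library Counter {
    // public library functions execute in the library's own deployed
    // bytecode; callers reach them through DELEGATECALL
    function increment(uint value) public pure returns (uint) {
        return value + 1;
    }
}

contract Ticker {
    uint public count;

    function tick() public {
        count = Counter.increment(count);
    }
}

contract Meter {
    uint public reading;

    function advance() public {
        reading = Counter.increment(reading);
    }
}
Ticker and Meter each carry only a placeholder for the library address in their bytecode, which the linker fills in before deployment; that is what keeps their individual deployment cost down.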

I'm hoping someone can weigh in with a better answer, but this is a good question and I wanted to give my opinion.
In short, as it relates specifically to upgradeable contracts, I don't think there is really a difference between the two approaches. With either implementation, you still have a separate storage contract and you are still issuing a call to the storage contract (one from the action contract through the interface and the other from the action contract indirectly through the library).
The only concrete difference comes with gas consumption. Going through the interface, you are issuing a single call operation. Going through a library, you are adding one layer of abstraction and end up with a delegatecall followed by a call. The gas overhead is not very big, though, so at the end of the day, I think the two approaches are very similar. The approach you take is personal preference.
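If that delegatecall hop ever matters, there is a way to shave it off (a sketch of mine, not something from the answer): declare the library function internal. Internal library functions are copied into the calling contract's bytecode at compile time, so that hop becomes a simple jump and only the call to the storage contract remains.
import './StorageWithLib.sol';
library StorageLibInternal {
    // internal functions are inlined into the caller at compile time,
    // so no DELEGATECALL is issued for this hop
    function getData(address _storageContract) internal view returns (uint) {
        return StorageWithLib(_storageContract).getData();
    }
}
Usage is unchanged: `using StorageLibInternal for address;` and then `storageContract.getData()`. The trade-off is that the library code is duplicated in every contract that uses it, so you give up the deploy-once property in exchange for the cheaper call.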
This is not meant to imply that libraries in general aren't useful. I use them a lot for code reuse for common data structures or operations (for example, since iterating in Solidity is a pain, I have a library to use as a basic iterable collection). I just don't see much value in using them for my storage contract integration. I'm curious to see if Solidity experts out there have a different view.
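For what it's worth, a basic iterable collection library of the kind mentioned above could look roughly like this (my own sketch; the answer does not show its code):
library IterableSet {
    struct Set {
        address[] items;
        mapping(address => uint) position; // 1-based; 0 means "not present"
    }

    function add(Set storage self, address item) internal {
        if (self.position[item] != 0) return; // already in the set
        self.items.push(item);
        self.position[item] = self.items.length;
    }

    function contains(Set storage self, address item) internal view returns (bool) {
        return self.position[item] != 0;
    }

    function size(Set storage self) internal view returns (uint) {
        return self.items.length;
    }

    function at(Set storage self, uint i) internal view returns (address) {
        return self.items[i];
    }
}
A contract declares `using IterableSet for IterableSet.Set;`, holds a Set in storage, and can then iterate with a plain for loop over size() and at().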

Related

PIMPL Patterns in "high-level" languages - Possible/Applicable?

In the C++ world, there are two well-known strategies for maintaining binary compatibility with a library:
Interfaces: all public classes are "interface" classes (only pure virtual methods, no members), which are implemented by private subclasses in the library
PIMPL pattern: all public classes hold a single member which is a pointer to a forward-declared class, whose definition is private to the library
Both of these achieve binary stability, but #1 comes with some major disadvantages. The primary one, I believe, is that in the library, methods that accept instances of the public interface classes almost always must immediately force-downcast them to the private implementation classes. The use of interfaces incorrectly signals to clients that they are free to supply their own implementations of these interfaces, which if they ever do, will immediately fail on one of these force-downcasts. Unless polymorphism is the goal, the use of interfaces is arguably the wrong design.
Now let's consider "high-level" languages like Java, Kotlin, C# and Swift (and maybe even Typescript and Ruby). We can certainly adopt strategy #1. However this strategy suffers from the same concerns mentioned above.
But what about the PIMPL pattern? There's no such thing as "forward declaration" in these languages, and we can't even separate the class definition and implementation into different files; the compiler does this for us when it creates the package. So does an analogous pattern exist in these languages that "hides" the private details, in the sense that it lets us freely modify private details without breaking binary compatibility?
Which leads to the next question...
Is it even necessary to begin with to "hide" class innards to achieve binary stability in those languages? This is necessary in C++ because of its on-stack value semantics, which makes compiled client code sensitive to the memory size of the library's classes. But to my knowledge, class instances in the "high-level" languages aren't moved around on the call stack, and instead work more like pointers/references would in C++, which may render the concern moot. If that's true, we can simply write classes "naively", and be sure that the binary compatibility remains stable as long as we don't mess with public methods/members. We could, however, do whatever we wish with private members, even if it entails changing the memory size of the public classes, and it wouldn't force client code to be recompiled.
So, in summary: are PIMPL patterns possible in these languages, or does the concept not even apply because there's no problem of private details "leaking" into the binary interface to begin with?

Where to place reusable code accessible for controllers and models

I have some functionality related to currency conversions in my Zend project. I'd like to make use of the functionality in Controllers as well as Models. Is there a best practice for where to put this code? Or is the fact that the functionality's used in both places an indicator that perhaps I should rethink the structure of the project so it's not needed in both places?
I think the purists would argue that if you're doing currency conversions in your controller code then you're probably doing something wrong, as there shouldn't really be any business logic in there. However, sometimes practical considerations outweigh purist concerns. Let's assume that this is one such case. :-)
If your currency class is a fairly simple utility-type class, then I'd lean towards creating a new directory under "application" called "utils" and then adding that directory to the resource loader in the application bootstrap:
protected function _initResourceLoader()
{
    $this->_resourceLoader->addResourceType( 'utility', 'utils', 'Utility' );
}
Then you can create a class called Application_Utility_Currency stored in the file named Currency.php in that directory and call static methods such as:
Application_Utility_Currency::convert( $from_currency, $to_currency, $amount );
This approach would be especially useful if you had other utility classes that were also looking for a home.
However, if your currency class contains richer functionality (such as connecting to external services to obtain exchange rate data, etc), then it would, IMO, be better to treat it as a "Service" rather than a "Utility". My definition of "model" is fairly loose and includes all data-related services, whether that data is located in the application database or elsewhere, so if the class is of the more complex variety, then I would just stick it in with the other models.

Do Extension Methods Hide Dependencies?

All,
Wanted to get a few thoughts on this. Lately I am becoming more and more of a subscriber of "purist" DI/IOC principles when designing/developing. Part of this (a big part) involves making sure there is little coupling between my classes, and that their dependencies are resolved via the constructor (there are certainly other ways of managing this, but you get the idea).
My basic premise is that extension methods violate the principles of DI/IOC.
I created the following extension method that I use to ensure that the strings inserted into database tables are truncated to the right size:
public static class StringExtensions
{
    public static string TruncateToSize(this string input, int maxLength)
    {
        int lengthToUse = maxLength;
        if (input.Length < maxLength)
        {
            lengthToUse = input.Length;
        }
        return input.Substring(0, lengthToUse);
    }
}
I can then call this on a string from within another class like so:
string myString = "myValue.TruncateThisPartPlease.";
string truncated = myString.TruncateToSize(8);
A fair translation of this without using an extension method would be:
string myString = "myValue.TruncateThisPartPlease.";
string truncated = StaticStringUtil.TruncateToSize(myString, 8);
Any class that uses either of the above examples could not be tested independently of the class that contains the TruncateToSize method (TypeMock aside). If I were not using an extension method, and I did not want to create a static dependency, it would look more like:
string myString = "myValue.TruncateThisPartPlease.";
string truncated = _stringUtil.TruncateToSize(myString, 8);
In the last example, the _stringUtil dependency would be resolved via the constructor and the class could be tested with no dependency on the actual TruncateToSize method's class (it could be easily mocked).
From my perspective, the first two examples rely on static dependencies (one explicit, one hidden), while the second inverts the dependency and provides reduced coupling and better testability.
So does the use of extension methods conflict with DI/IOC principles? If you're a subscriber of IOC methodology, do you avoid using extension methods?
I think it's fine - because it's not like TruncateToSize is a realistically replaceable component. It's a method which will only ever need to do a single thing.
You don't need to be able to mock out everything - just services which either disrupt unit testing (file access etc) or ones which you want to test in terms of genuine dependencies. If you were using it to perform authentication or something like that, it would be a very different matter... but just doing a straight string operation which has absolutely no configurability, different implementation options etc - there's no point in viewing that as a dependency in the normal sense.
To put it another way: if TruncateToSize were a genuine member of String, would you even think twice about using it? Do you try to mock out integer arithmetic as well, introducing IInt32Adder etc? Of course not. This is just the same, it's only that you happen to be supplying the implementation. Unit test the heck out of TruncateToSize and don't worry about it.
I see where you are coming from; however, if you are trying to mock out the functionality of an extension method, I believe you are using them incorrectly. Extension methods should be used to perform a task that would simply be inconvenient syntactically without them. Your TruncateToSize is a good example.
Testing TruncateToSize would not involve mocking it out; it would simply involve the creation of a few strings and testing that the method actually returned the proper value.
On the other hand, if you have code in your data layer contained in extension methods that is accessing your data store, then yes, you have a problem and testing is going to become an issue.
I typically only use extension methods in order to provide syntactic sugar for small, simple operations.
Extension methods, partial classes and dynamic objects: I really like them, but you must tread carefully, there be monsters here.
I would take a look at dynamic languages and see how they cope with these sorts of problems on a day-to-day basis; it's really enlightening. Especially when they have nothing to stop them from doing stupid things apart from good design and discipline. Everything is dynamic at run time; the only thing to stop them is the computer throwing a major runtime error. "Duck typing" is the maddest thing I have ever seen. Good code comes down to good program design, respect for others on your team, and the trust that every member, although they have the ability to do some wacky things, chooses not to, because good design leads to better results.
As for your test scenario with mock objects/IoC/DI: would you really put some heavy-duty work in an extension method, or just some simple static stuff that operates in a functional way? I tend to use them as you would in a functional programming style: input goes in, results come out, with no magic in the middle, just straight-up framework classes that you know the guys at MS have designed and tested :P and that you can rely on.
If you are doing some heavy-lifting stuff using extension methods, I would look at your program design again: check out your CRC designs, class models, use cases, DFDs, action diagrams or whatever you like to use, and figure out where in this design you planned to put this stuff in an extension method instead of a proper class.
At the end of the day, you can only test against your system design, not code outside of your scope. If you're going to use extension classes, my advice would be to look at object composition models instead and use inheritance only when there is a very good reason.
Object composition always wins out with me, as it produces solid code. You can plug components in, take them out and do what you like with them. Mind you, this all depends on whether you use interfaces or not as part of your design. Also, if you use composition classes, the class hierarchy tree gets flattened into discrete classes and there are fewer places where your extension method will be picked up through inherited classes.
If you must use a class that acts upon another class, as is the case with extension methods, look at the visitor pattern first and decide if it's a better route.
It's a pain because they are hard to mock. I usually use one of these strategies:
Yep, scrap the extension; it's a PITA to mock out.
Use the extension and just test that it did the right thing, i.e. pass data into the truncate and check it got truncated.
If it's not some trivial thing, and I HAVE to mock it, I'll make my extension class have a setter for the service it uses, and set that in the test code.
i.e.
static class TruncateExtensions
{
    // settable service hook so test code can swap in a mock;
    // the getter is private, so only the extension method reads it
    public static ITruncateService Service { private get; set; }

    public static string TruncateToSize(this string s, int size)
    {
        return (Service ?? (Service = new MyDefaultTruncateServiceImpl())).TruncateToSize(s, size);
    }
}
This is a bit scary because someone might set the service when they shouldn't, but I'm a little cavalier sometimes, and if it was really important, I could do something clever with #if TEST flags, or the ServiceLocator pattern to avoid the setter being used in production.

How do interfaces work and How to use them in practical programming [closed]

I started my programming life not long ago. It is said that if you combine data and algorithms together, you get a program. That is exactly what I do now, yet I hear these theories:
Build for today, design for tomorrow
Before coding, make each module as clear as possible
Your code should not be understandable by just a handful of people
Prepare for predictable changes
All in all, I have come across no theories about interfaces. I wonder where and when to use interfaces to meet good standards.
Maybe you know a lot. Then give me some tips! Examples are wonderful!
Interface (computer science)
Application programming interface
Sorry if the answer is very general, but it's as general as the question.
Interfaces: Why can't I seem to grasp them?
Understanding interfaces
Interfaces are useful when you are designing your code to be highly testable. If you're referencing interfaces instead of concrete classes, it's about a million times easier to test them.
Interfaces are also useful when you are defining behaviours that should be standard across an application or a framework. Think about things like IDisposable and INamingContainer that have varying degrees of usefulness in many places. IDisposable.Dispose() is used to release unmanaged resources; the "how" is up to the implementer, but the fact it exists signifies something to the outside world.
In Java terms, the JDBC API is an excellent example of the power of interfaces. Each DB vendor supplies its own JDBC driver (which is just a concrete implementation of the JDBC API), and you can write uniform JDBC code without worrying about compatibility with tens of different databases.
When you learn to drive, you are concerned about the interface of the car (the pedal, the brakes, the steering wheel), not its implementation: disk brakes and drum brakes are accessed through the same interface (the pedal). You are not concerned about their nature, how they are driven etc... unless you are a mechanic. When you drive, you just access them by their generic interface. Performance can be different, but behavior is not.
There are two central issues when programming, one technical, the other business-oriented:
managing complexity. Complexity comes from the number of entities and the number of interactions among these entities.
parallelizing development tasks to achieve the release before your competitor, possibly with a better product (although it is not required these days).
Interface-oriented programming is a good method for solving both problems. The more you are able to hide complexity behind the funnel of a well-defined, generic interface, the fewer interactions you have (because you now see a large number of entities as a single, whole, complex entity no longer made of subparts), and the better you can parallelize development tasks, because everybody solves the problem he is competent in, without having to mess with a field he is not.
If you have read all this jargon, but not yet found out what they mean, then you are not reading all the right books.
Try:
Robert C. Martin (unclebob)'s Clean Code.
Michael Feathers' "Working Effectively with Legacy Code".
I know it helped me a lot.
Cheers
I'll try to simplify with a pragmatic example:
////////////////////
// VERY(!) simplified, real forums are more complex.
public interface ForumActions {
    /** @throws UserDisallowedException if the email is not allowed to post. */
    Thread startThread(String email, String threadName, String description)
        throws UserDisallowedException;

    /** @throws UserDisallowedException if the email is not allowed to comment. */
    void commentThread(String email, Thread thread, String comment)
        throws UserDisallowedException;
}

///////////////////////////
// IMPLEMENTATION 1
public class StackOverflowForum implements ForumActions {
    /**
     * @param email Email of the poster, which must have been registered before.
     * @throws UserDisallowedException The mail address is not registered.
     */
    public Thread startThread(String email, String threadName, String description)
            throws UserDisallowedException {
        if (isNotRegistered(email))
            throw new UserDisallowedException(...);
        ...
        return thread;
    }
    ...
}

///////////////////////////
// IMPLEMENTATION 2
public class JavaRanchForum implements ForumActions {
    /**
     * @param email Email of the poster, which mustn't have exceeded a posting
     * limit per day. This way we protect against spammers.
     * @throws UserDisallowedException The mail was identified as a spamming origin.
     */
    public Thread startThread(String email, String threadName, String description)
            throws UserDisallowedException {
        if (isSpammer(email))
            throw new UserDisallowedException(...);
        ...
        return thread;
    }
    ...
}
As you see, the interface itself is a contract, which is a combination of:
'how' do I call it: what name does it have, what types are involved; the syntax.
'what' does it do: what is the effect of using the interface; this is documented by good naming and/or Javadoc.
The classes above are implementations of the interface, which have to follow its contract.
Therefore interfaces shouldn't be too strict about 'how' the implementations have to fulfil the contract (in my case: why the user is not allowed to post/comment; spammer vs. not-registered).
As said, this is a very simplified example and by far NOT a complete reference for interfaces.
For fun and practical examples about interfaces (and object orientation in general), have a look at Head First Design Patterns. It is nice for novices as well as professionals.

Encapsulation in the age of frameworks

At my old C++ job, we always took great care in encapsulating member variables, and only exposing them as properties when absolutely necessary. We'd have really specific constructors that made sure you fully constructed the object before using it.
These days, with ORM frameworks, dependency-injection, serialization, etc., it seems like you're better off just relying on the default constructor and exposing everything about your class in properties, so that you can inject things, or build and populate objects more dynamically.
In C#, it's been taken one step further with Object initializers, which give you the ability to basically define your own constructor. (I know object initializers are not really custom constructors, but I hope you get my point.)
Are there any general concerns with this direction? It seems like encapsulation is starting to become less important in favor of convenience.
EDIT: I know you can still carefully encapsulate members, but I just feel like when you're trying to crank out some classes, you either have to sit and carefully think about how to encapsulate each member, or just expose it as a property and worry about how it is initialized later. It just seems like the easiest approach these days is to expose things as properties and not be so careful. Maybe I'm just flat wrong, but that's just been my experience, especially with the new C# language features.
I disagree with your conclusion. There are many good ways of encapsulating in C# with all the above-mentioned technologies while maintaining good software coding practices. I would also say that it depends on whose technology demo you're looking at, but in the end it comes down to reducing the state-space of your objects so that you can make sure they hold their invariants at all times.
Take object-relational frameworks: most of them allow you to specify how they are going to hydrate the entities; NHibernate, for example, allows you to say access="property" or access="field.camelcase" and similar. This allows you to encapsulate your properties.
Dependency injection works on the other types you have, mostly those which are not entities, even though you can combine AOP+ORM+IOC in some very nice ways to improve the state of these things. IoC is often used from layers above your domain entities if you're building a data-driven application, which I guess you are, since you're talking about ORMs.
They ("they" being application and domain services and other intrinsic classes to the program) expose their dependencies but in fact can be encapsulated and tested in even better isolation than previously since the paradigms of design-by-contract/design-by-interface which you often use when mocking dependencies in mock-based testing (in conjunction with IoC), will move you towards class-as-component "semantics". I mean: every class, when built using the above, will be better encapsulated.
Updated for urig: This holds true for both exposing concrete dependencies and exposing interfaces. First, about interfaces: what I was hinting at above was that services and other application classes which have dependencies can, with OOP, depend on contracts/interfaces rather than specific implementations. In C/C++ and older languages there was no interface construct, and abstract classes can only go so far. Interfaces allow you to tie different runtime instances to the same interface without having to worry about leaking internal state, which is what you're trying to get away from when abstracting and encapsulating. With abstract classes you can still provide a class implementation, just that you can't instantiate it; but inheritors still need to know about the invariants in your implementation, and that can mess up state.
Secondly, about concrete classes as properties: you have to be wary about what types of types ;) you expose as properties. Say you have a List in your instance; then don't expose IList as the property; this will probably leak and you can't guarantee that consumers of the interface don't add things or remove things which you depend on; instead expose something like IEnumerable and return a copy of the List, or even better, do it as a method:
public IEnumerable MyCollection { get { return _List.Enum(); } }
and you can be 100% certain to get both the performance and the encapsulation. No one can add or remove to that IEnumerable and you still don't have to perform a costly array copy. The corresponding helper method:
static class Ext {
    public static IEnumerable<T> Enum<T>(this IEnumerable<T> inner) {
        foreach (var item in inner) yield return item;
    }
}
So while you can't get 100% encapsulation in, say, creating overloaded equals operators/methods, you can get close with your public interfaces.
You can also use the new features of .NET 4.0, built on Spec#, to verify the contracts I talked about above.
Serialization will always be there and has been for a long time. Previously, before the internet era, it was used for saving your object graph to disk for later retrieval; now it's used in web services, in copy semantics and when passing data to e.g. a browser. This doesn't necessarily break encapsulation if you put a few [NonSerialized] attributes or the equivalents on the correct fields.
Object initializers aren't the same as constructors, they are just a way of collapsing a few lines of code. Values/instances in the {} will not be assigned until all of your constructors have run, so in principle it's just the same as not using object initializers.
I guess what you have to watch out for is deviating from the good principles you've learnt at your previous job, and make sure you keep your domain objects filled with business logic encapsulated behind good interfaces, and ditto for your service layer.
Private members are still incredibly important. Controlling access to internal object data is always good, and shouldn't be ignored.
Many times I've found private methods to be overkill. Most of the time, if the work you're doing is important enough to break out, you can refactor it in such a way that either a) the private method is trivial, or b) it is an integral part of other functions.
In addition, with unit testing, having many private methods makes it very hard to unit test. There are ways around that (making test objects friends, etc.), but they add difficulties.
I wouldn't discount private methods entirely though. Any time there's important, internal algorithms that really make no sense outside of the class there's no reason to expose those methods.
I think that encapsulation is still important; it helps more in libraries than anything else, imho. You can create a library that does X, but you don't need everyone to know how X was created, and you may even deliberately obfuscate the way you create X. When I learned about encapsulation, I was also taught that you should always define your variables as private to protect them from a data attack: to protect against a hacker breaking your code and accessing variables that they are not supposed to use.