How does Ethereum get my deployed contract info?

I am practicing with Truffle to build my contracts.
When I finish a contract, I open testrpc
and use the command line below to deploy it:
truffle migrate
Ethereum can get my deployed contract info if I write some code in my app.js as below:
if (typeof web3 !== 'undefined') {
  App.web3Provider = web3.currentProvider;
  web3 = new Web3(web3.currentProvider);
} else {
  // set the provider you want from Web3.providers
  App.web3Provider = new Web3.providers.HttpProvider('http://localhost:8545');
  web3 = new Web3(App.web3Provider);
}
$.when(
  // load my contract json file
  $.getJSON('Crowdsale.json', function (data) {
    var CrowdsaleTokenArtifact = data;
    App.contracts.Crowdsale = TruffleContract(CrowdsaleTokenArtifact);
    // Set the provider for our contract.
    App.contracts.Crowdsale.setProvider(App.web3Provider);
  })
).then(function () {
  // start doing something
});
How does Ethereum get my contract info?
I never tell Ethereum which contract on the network is mine;
I just load my contract's JSON file, and Ethereum can get my contract info.
Does that mean that if someone has my contract's JSON file, they can do the same things in the contract as I can?
And if there is another deployed contract with the same name or the same code structure as mine,
how does Ethereum recognize which is which?

You need both the ABI and the contract bytecode to deploy your contract.
The ABI is just a JSON description of all your function signatures, data types, events, etc. Think of it as the stub for your contract.
The bytecode is what gets executed during deployment; running it returns the runtime bytecode of the contract.
Once a contract is deployed, you need the ABI and the address of the deployed contract to interact with it. Multiple deployments of the same contract will have different addresses.
The ABI doesn't reveal everything about how your contract works. Just the interface. However, it's common practice to publicly post your contract code so others can review it for security concerns and build trust with those who want to use your contract. Transparency with your code doesn't mean that anyone can use your contract. You still control who can access your contract and in what ways (usually through modifiers).
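To make this concrete: a Truffle artifact like Crowdsale.json records, per network ID, the address that `truffle migrate` deployed to, alongside the ABI. A minimal sketch follows; the artifact object is a trimmed, hypothetical example (real artifacts contain far more fields, and the address here is made up):

```javascript
// A trimmed, hypothetical Truffle artifact: the real Crowdsale.json bundles
// the full ABI plus a "networks" map written by `truffle migrate`.
const artifact = {
  contractName: 'Crowdsale',
  abi: [{ name: 'buyTokens', type: 'function', inputs: [], outputs: [] }],
  networks: {
    // network id -> deployment record (address invented for illustration)
    '5777': { address: '0xCfEB869F69431e42cdB54A4F4f105C19C080A601' }
  }
};

// TruffleContract does essentially this lookup against the current network id.
// This is how it knows which on-chain contract is yours: identity is the
// deployed address, not the contract's name or source code.
function deployedAddress(artifact, networkId) {
  const record = artifact.networks[networkId];
  if (!record) {
    throw new Error(artifact.contractName + ' not deployed on network ' + networkId);
  }
  return record.address;
}
```

So anyone holding the JSON file gets the same ABI and address you do; what they can actually do with the contract is governed by the contract's own access control (modifiers).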

Related

Are there disadvantages to splitting a class's methods across different extensions?

I'm writing an API service (called ApiService) that handles calls to a server which create, read, update and delete many different types of data. Each type of data has its own set of calls e.g. postsList, postCreate, postRead, postUpdate and postDelete.
If there are say 7 different types of data each with at least 5 API calls then the ApiService class becomes large and cumbersome to work with (say >3000 lines). I want to find a way to segment the API service so it can be worked on more easily, but still allow for my app to find any given API call it needs to through only the base API service, rather than having to pull in a PostsApiService or CommentsApiService when it needs to make a particular API call.
A way I've thought of to achieve what I'm looking for is extensions. I can keep each extension in its own file and use the part of and part directives so the extensions are included in the base file containing the original API service, and when ApiService is used across the app I can access all the extension methods:
// post_api.dart
part of 'api_service.dart';

extension PostApi on ApiService {
  Future<List<Posts>> posts() {
    ...
  }
  Future<List<Posts>> postCreate() {
    ...
  }
  ...
}

// api_service.dart
part 'post_api.dart';

class ApiService {
  ...
}
I quite like this approach; it helps me reason about the different aspects of the API much more easily than before, but it does feel quite hacky. Additionally, since the base ApiService doesn't have any methods of its own, I feel like I'm misusing Dart's extension functionality.
Does this way open me up to all sorts of unforeseen complications, performance issues or bad practices? Is there a better way to achieve the segmentation I'm looking for?

Versioning of REST webservices on top of gRPC

I've implemented an API service using gRPC with protocol buffers and then used grpc-gateway to expose that as a set of REST webservices.
Now I'm getting to the point where I'm having to maintain different versions of the API and I'm stuck.
In my proto file I have a handler defined like this, for instance:
rpc MerchantGet (MerchantRequest) returns (MerchantResponse) {
  option (google.api.http) = {
    get: "/v1.1.0/myapi/merchant/{MerchantID}"
  };
}
In my Go code of course I then have a function, MerchantGet, to which GET actions to /v1.1.0/myapi/merchant/{MerchantID} are mapped.
Now, let's say I want to add more functionality to the MerchantGet method and release a new version. I intend to maintain backwards compatibility as per the Semantic Versioning Specification so if I understand correctly that means I can make underlying changes to my MerchantGet method and have it supersede the older method as long as it does not require different inputs from the 3rd party (MerchantRequest) or change the response sent to the 3rd party (MerchantResponse) other than by adding additional fields to the end of the response. (Correct me if I'm wrong in this assumption).
My question is, how do I write proto handlers to serve a method to endpoints of different versions? One option that came to mind would look something as follows:
rpc MerchantGet (MerchantRequest) returns (MerchantResponse) {
  option (google.api.http) = {
    get: "/v1.6.0/myapi/merchant/{MerchantID}"
    additional_bindings {
      get: "/v1.5.0/myapi/merchant/{MerchantID}"
    }
    additional_bindings {
      get: "/v1.4.2/myapi/merchant/{MerchantID}"
    }
    additional_bindings {
      get: "/v1.4.1/myapi/merchant/{MerchantID}"
    }
    additional_bindings {
      get: "/v1.4.0/myapi/merchant/{MerchantID}"
    }
    additional_bindings {
      get: "/v1.3.0/myapi/merchant/{MerchantID}"
    }
    additional_bindings {
      get: "/v1.2.0/myapi/merchant/{MerchantID}"
    }
    additional_bindings {
      get: "/v1.1.0/myapi/merchant/{MerchantID}"
    }
  };
}
But surely this can't be the idiomatic way of achieving this? It's certainly not very elegant at all as, with each new minor version or patch, I would have to extend these additional_bindings to each of my methods (above I'm just using one method as an example).
From the SemVer spec:
Given a version number MAJOR.MINOR.PATCH, increment the:
MAJOR version when you make incompatible API changes,
MINOR version when you add functionality in a backwards-compatible manner, and
PATCH version when you make backwards-compatible bug fixes.
Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.
The only version that matters with respect to REST endpoint versioning is the MAJOR version, because all MINOR and PATCH changes must be backwards-compatible.
To answer your question:
Use only major version numbers in the REST URI. The rest is an implementation detail, from a REST standpoint.
So, your proto service will be:
rpc MerchantGet (MerchantRequest) returns (MerchantResponse) {
  option (google.api.http) = {
    get: "/v1/myapi/merchant/{MerchantID}"
  };
}
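Since only the MAJOR version appears in the path, the server can route on it alone, and minor or patch releases keep serving the same /v1 endpoints. A rough sketch of that routing rule (plain JavaScript for illustration; this is not a grpc-gateway API, just the idea):

```javascript
// Extract the major version from a path like /v1/myapi/merchant/42.
// Paths carrying full SemVer strings (/v1.1.0/...) are rejected, because
// minor and patch versions are backwards-compatible and don't belong
// in the URI.
function majorVersion(path) {
  const m = path.match(/^\/v(\d+)\//);
  if (!m) throw new Error('unversioned or over-versioned path: ' + path);
  return Number(m[1]);
}
```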

How do I detect whether a mongodb serializer is already registered?

I have created a custom serializer for MongoDB.
I can register it and it works as expected.
However, my application sometimes throws an error because it tries to register the serializer twice.
How do I detect whether a serializer has already been registered and thus stop my application from registering a second time?
If you are using
BsonSerializer.RegisterSerializer(typeof(Type), typeSerializer);
you might get the error "there is already a serializer registered for type", because you cannot register a serializer for the same type twice. But you can write your own serialization provider, and that provider will be consulted before the default serializers.
For instance, say you want to use local DateTime instead of UTC, which is the default.
All you need to do is write a class implementing IBsonSerializationProvider and register the provider with BsonSerializer as soon as possible.
Here is the sample code:
public class LocalDateTimeSerializationProvider : IBsonSerializationProvider
{
    public IBsonSerializer GetSerializer(Type type)
    {
        return type == typeof(DateTime) ? DateTimeSerializer.LocalInstance : null;
    }
}
And to register it:
BsonSerializer.RegisterSerializationProvider(new LocalDateTimeSerializationProvider());
I hope this helps; you can also read the original documentation here.
This applies to version 2.4 of the MongoDB .NET driver.
TL;DR: If you are lazy, use BsonSerializer.LookupSerializer or BsonMemberMap.GetSerializer. To do it right, make sure the registration code is called once and only once.
The best approach to avoid this is to make sure the serializer is registered only once. It's a good idea to have some global startup code that registers anything global to the application once, and only once. That includes things like dependency-injector configuration and tools like AutoMapper and the MongoDB driver. If you call this code only once, from a single point in the code, you don't need to worry about thread safety, deadlocks or similar troubles.
The MongoDB driver configuration settings are thread-safe, but don't assume that this is true for all software packages that you might need to configure. Also, locking can be very expensive performance-wise if your code is multi-threaded, for instance in a web application. Last but not least, the lookup you're doing might not be trivial in the first place, because some methods need to walk an entire inheritance tree.
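The "call it once, from one place" advice can be captured with a tiny guard. Here is a language-agnostic sketch in JavaScript; in C# the callback would wrap your BsonSerializer.RegisterSerializer calls:

```javascript
// Run the registration callback exactly once; later calls become no-ops
// instead of throwing "there is already a serializer registered for type".
let registered = false;

function registerSerializersOnce(register) {
  if (registered) return false; // already registered: skip silently
  register();
  registered = true;
  return true;
}
```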

Utilizing RijndaelManaged, Enterprise Library and Autofac together

I'm newly experimenting with the cryptography application block while using Autofac as the container.
As a result, I'm using the nuget package EntLibContrib 5.0 - Autofac Configurator.
With the DPAPI Symmetric Crypto Provider, I was able to encrypt/decrypt data just fine.
However, with RijndaelManaged, I receive an ActivationException:
Microsoft.Practices.ServiceLocation.ActivationException: Activation error occured while trying to get instance of type ISymmetricCryptoProvider, key "RijndaelManaged" ---> Autofac.Core.Registration.ComponentNotRegisteredException: The requested service 'RijndaelManaged (Microsoft.Practices.EnterpriseLibrary.Security.Cryptography.ISymmetricCryptoProvider)' has not been registered. To avoid this exception, either register a component to provide the service, check for service registration using IsRegistered(), or use the ResolveOptional() method to resolve an optional dependency.
Per instructions here: http://msdn.microsoft.com/en-us/library/ff664686(v=pandp.50).aspx
I am trying to inject CryptographyManager into MyService.
My bootstrapping code looks like this:
var builder = new ContainerBuilder();
builder.RegisterEnterpriseLibrary();
builder.RegisterType<MyService>().As<IMyService>();
_container = builder.Build();
var autofacLocator = new AutofacServiceLocator(_container);
EnterpriseLibraryContainer.Current = autofacLocator;
App.config has this info defined for symmetricCryptoProviders:
name: RijndaelManaged
type: Microsoft.Practices.EnterpriseLibrary.Security.Cryptography.HashAlgorithmProvider, Microsoft.Practices.EnterpriseLibrary.Security.Cryptography, Version=5.0.505.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35
algorithmType: System.Security.Cryptography.RijndaelManaged
protectedKeyFilename: [path_to_my_key]
protectedKeyProtectionScope: LocalMachine
Anyone have experience in this combination of technologies?
After some testing, I believe I may go with a Unity container instead, since I have no preference in IoC containers other than that whatever I use should integrate nicely with ASP.NET MVC3 and HTTP-hosted WCF services.
My bootstrapping code then becomes simpler:
var container = new UnityContainer()
.AddNewExtension<EnterpriseLibraryCoreExtension>();
container.RegisterType<IMyService, MyService>();
I actually wrote the Autofac EntLib configurator (with some help from some of the P&P folks). It's been tested with the exception handling block and logging block, but I haven't tried it with the cryptography stuff.
EntLib has an interesting thing where it sometimes requires registered services to be named, and I'm guessing from the exception where it says...
type ISymmetricCryptoProvider, key "RijndaelManaged"
...I'm thinking EntLib wants you to register a named service, like:
builder.Register(c =>
{
    // create the HashAlgorithmProvider using
    // the RijndaelManaged algorithm
})
.Named<ISymmetricCryptoProvider>("RijndaelManaged");
I'm sort of guessing at the exact registration since, again, I've not got experience with it or tested it, but the idea is that EntLib is trying to register a named service whereas the actual service isn't getting registered with the name.
The RegisterEnterpriseLibrary extension basically goes through and tries to use the same algorithm that Unity uses to do the named/unnamed registrations. I'm guessing you've encountered an edge case where something's not getting handled right. EntLib is pretty well tied to Unity, even if they did try to abstract it away.
If you're not tied to Autofac, Unity is going to be your lowest-friction path forward. I like the ease of use and more lightweight nature of Autofac, and my apps are tied to it, so I needed everything to work that way; if you don't have such an affinity, might be easier to just use Unity.
Sorry that's not a super answer. EntLib wire-up in IoC is a really complex beast.

How to version REST URIs

What is the best way to version REST URIs? Currently we have a version # in the URI itself, ie.
http://example.com/users/v4/1234/
for version 4 of this representation.
Does the version belong in the queryString? ie.
http://example.com/users/1234?version=4
Or is versioning best accomplished another way?
Do not version URLs, because ...
you break permalinks
The URL changes will spread like a disease through your interface. What do you do with representations that have not changed but link to the representation that has? If you change the URL, you break old clients. If you leave the URL, your new clients may not work.
Versioning media types is a much more flexible solution.
Assuming that your resource is returning some variant of application/vnd.yourcompany.user+xml all you need to do is create support for a new application/vnd.yourcompany.userV2+xml media type and through the magic of content negotiation your v1 and v2 clients can co-exist peacefully.
In a RESTful interface, the closest thing you have to a contract is the definition of the media-types that are exchanged between the client and the server.
The URLs that the client uses to interact with the server should be provided by the server, embedded in previously retrieved representations. The only URL that needs to be known by the client is the root URL of the interface. Adding version numbers to URLs only has value if you construct URLs on the client, which you are not supposed to do with a RESTful interface.
If you need to make a change to your media-types that will break your existing clients then create a new one and leave your urls alone!
And for those readers currently saying that this makes no sense if I am using application/xml and application/json as media types: how are we supposed to version those? You're not. Those media types are pretty much useless to a RESTful interface unless you parse them using code-download, at which point versioning is a moot point.
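Server-side, the content negotiation this answer describes boils down to branching on the Accept header. A sketch using the answer's hypothetical vendor media type names:

```javascript
// Pick the representation version from the Accept header. The newer media
// type is checked first; note that "userV2+xml" does not contain the
// substring "user+xml", so the order of checks is safe here.
function negotiate(acceptHeader) {
  if (acceptHeader.includes('application/vnd.yourcompany.userV2+xml')) return 'v2';
  if (acceptHeader.includes('application/vnd.yourcompany.user+xml')) return 'v1';
  return null; // the server would answer 406 Not Acceptable
}
```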
I would say making it part of the URI itself (option 1) is best because v4 identifies a different resource than v3. Query parameters like in your second option can be best used to pass-in additional (query) info related to the request, rather than the resource.
Ah, I'm putting my old grumpy hat on again.
From a ReST perspective, it doesn't matter at all. Not a sausage.
The client receives a URI it wants to follow, and treats it as an opaque string. Put whatever you want in it, the client has no knowledge of such a thing as a version identifier on it.
What the client knows is that it can process the media type, and I'll advise following Darrel's advice. Also, I personally feel that needing to change the format used in a RESTful architecture 4 times should raise huge, massive warning signs that you're doing something seriously wrong, and completely bypassing the need to design your media type for change resilience.
But either way, the client can only process a document with a format it can understand, and follow links in it. It should know about the link relationships (the transitions). So what's in the URI is completely irrelevant.
I personally would vote for http://localhost/3f3405d5-5984-4683-bf26-aca186d21c04
A perfectly valid identifier that will prevent any further client developer or person touching the system to question if one should put v4 at the beginning or at the end of a URI (and I suggest that, from the server perspective, you shouldn't have 4 versions, but 4 media types).
You should NOT put the version in the URL, you should put the version in the Accept Header of the request - see my post on this thread:
Best practices for API versioning?
If you start sticking versions in the URL you end up with silly URLs like this:
http://company.com/api/v3.0/customer/123/v2.0/orders/4321/
And there are a bunch of other problems that creep in as well - see my blog:
http://thereisnorightway.blogspot.com/2011/02/versioning-and-types-in-resthttp-api.html
These (less-specific) SO questions about REST API versioning may be helpful:
Versioning RESTful services?
Best practices for web service REST API versioning
There are 4 different approaches to versioning the API:
Adding version to the URI path:
http://example.com/api/v1/foo
http://example.com/api/v2/foo
When you have a breaking change, you must increment the version: v1, v2, v3...
You can implement a controller in your code like this:
@RestController
public class FooVersioningController {

    @GetMapping("v1/foo")
    public FooV1 fooV1() {
        return new FooV1("firstname lastname");
    }

    @GetMapping("v2/foo")
    public FooV2 fooV2() {
        return new FooV2(new Name("firstname", "lastname"));
    }
}
Request parameter versioning:
http://example.com/api/v2/foo/param?version=1
http://example.com/api/v2/foo/param?version=2
The version parameter can be optional or required depending on how you want the API to be used.
The implementation can be similar to this:
@GetMapping(value = "/foo/param", params = "version=1")
public FooV1 paramV1() {
    return new FooV1("firstname lastname");
}

@GetMapping(value = "/foo/param", params = "version=2")
public FooV2 paramV2() {
    return new FooV2(new Name("firstname", "lastname"));
}
Passing a custom header:
http://localhost:8080/foo/produces
With header:
headers[Accept=application/vnd.company.app-v1+json]
or:
headers[Accept=application/vnd.company.app-v2+json]
The largest advantage of this scheme is mostly semantic: you aren't cluttering the URI with anything to do with versioning.
Possible implementation:
@GetMapping(value = "/foo/produces", produces = "application/vnd.company.app-v1+json")
public FooV1 producesV1() {
    return new FooV1("firstname lastname");
}

@GetMapping(value = "/foo/produces", produces = "application/vnd.company.app-v2+json")
public FooV2 producesV2() {
    return new FooV2(new Name("firstname", "lastname"));
}
Changing Hostnames or using API Gateways:
Essentially, you’re moving the API from one hostname to another. You might even just call this building a new API to the same resources.
Also, you can do this using API gateways.
I wanted to create versioned APIs and I found this article very useful:
http://blog.steveklabnik.com/posts/2011-07-03-nobody-understands-rest-or-http
There is a small section on "I want my API to be versioned". I found it simple and easy to understand. The crux is to use the Accept field in the request header to pass version information.
If the REST services require authentication before use, you could easily associate the API key/token with an API version and do the routing internally. To use a new version of the API, a new API key could be required, linked to that version.
Unfortunately, this solution only works for auth-based APIs. However, it does keep versions out of the URIs.
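A sketch of the key-to-version routing described above; the key table, names and version numbers are all hypothetical:

```javascript
// Each issued API key is bound to one API version; the dispatcher looks
// the key up and routes to the matching handler internally, so the URI
// never carries a version.
const apiKeys = {
  'key-abc': { owner: 'client-1', apiVersion: 1 },
  'key-def': { owner: 'client-2', apiVersion: 2 },
};

function versionFor(apiKey) {
  const entry = apiKeys[apiKey];
  if (!entry) throw new Error('unknown API key'); // a 401 in a real service
  return entry.apiVersion;
}
```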
If you use URIs for versioning, then the version number should be in the URI of the API root, so every resource identifier can include it.
Technically a REST API does not break from URL changes (a result of the uniform interface constraint). It breaks only when the related semantics (for example, an API-specific RDF vocab) change in a non-backward-compatible way, which is rare. Currently a lot of people do not use links for navigation (the HATEOAS constraint) or vocabs to annotate their REST responses (the self-descriptive message constraint), and that's why their clients break.
Custom MIME types and MIME type versioning do not help, because putting the related metadata and the structure of the representation into a short string does not work. Of course, the metadata and the structure will change frequently, and so will the version number...
So, to answer your question: the best way is to annotate your requests and responses with vocabs (Hydra, linked data) and forget versioning, or use it only for non-backward-compatible vocab changes (for example, if you want to replace one vocab with another).
I'd include the version as an optional value at the end of the URI. This could be a suffix like /V4 or a query parameter like you've described. You might even redirect the /V4 to the query parameter so you support both variations.
I vote for putting the version in the MIME type, not in the URL,
but my reason is not the same as in the other answers.
I think the URL should be unique (excepting redirects) for locating a unique resource.
So, if you accept /v2.0 in URLs, why not /ver2.0, or /v2/, or /v2.0.0? Or even -alpha and -beta? (Then it totally becomes the concept of SemVer.)
So the version in the MIME type is more acceptable than in the URL.