Explanation of RPC (remote procedure call) and RMI (remote method invocation)

Can someone explain this in a (better/simpler) way?
The remote procedure call (RPC) approach extends the common programming
abstraction of the procedure call to distributed environments, allowing a calling process to call a procedure in a remote node as if it is local.
Remote method invocation (RMI) is similar to RPC but for distributed objects, with added benefits in terms of using object-oriented programming concepts in
distributed systems and also extending the concept of an object reference to the
global distributed environments, and allowing the use of object references as
parameters in remote invocations.
I just don't understand the way it is explained...

Taking out the remote aspect, which is common to both, the difference is the difference between calling a function in a procedural language and calling a method in an OOP language.
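To make that concrete, here is a minimal Scala sketch (all names are made up, and the network layer is ignored): RPC looks like calling a free-standing procedure, while RMI looks like invoking a method through an object reference, and that reference could itself be passed as a parameter of another remote invocation.

```scala
// RPC style: call a procedure as if it were local; in a real system the
// arguments would be marshalled and the body would execute on a remote node.
def getQuote(symbol: String): Double =
  if (symbol == "ACME") 42.0 else 0.0 // placeholder body

// RMI style: invoke a method through an object reference; with real RMI the
// reference may denote a remote object, and such references can themselves
// be passed as parameters in remote invocations.
trait Quoter {
  def getQuote(symbol: String): Double
}

def printQuote(quoter: Quoter, symbol: String): Unit =
  println(quoter.getQuote(symbol)) // quoter could be a stub for a remote object
```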


When should I use uvm_config_db?

Is the only use of uvm_config_db when we have more than one testbench in our system?
I'll be glad to have some explanation about this macro.
The uvm_config_db class (it's not a macro) has many uses besides multiple testbenches. The most common is sharing data from the top-level testbench module, like the location of interface instances, with the drivers and monitor classes that need to access the virtual interfaces.
It also gets used for communicating data between components and sequences, not just for passing values but also as a notification that data has been set.
You could certainly write a testbench without using the uvm_config_db, or the entire UVM for that matter. But that misses the whole point of writing testbenches for maintainability within the same testbench and reusability across other testbenches.

Why use interfaces

I see the benefit of interfaces, to be able to add new implementations via contract.
But I don't see how to handle the following problem:
Imagine you have an interface DB with a method "startTransaction".
Everything is fine: you implement it for MySQL and PostgreSQL. But tomorrow you move to MongoDB, and then you have no transaction support.
What do you do?
1) Empty method - bad, because you think you have transactions but you don't.
2) Create your own method - but then it needs parameters that differ from the regular "startTransaction" method.
And on top of that, sometimes simple interfaces just don't work.
Example: you need additional parameters for different implementations.
If you're exposing the concept of transactions on your interface, then you must functionally support transactions no matter what, since users of the interface will logically depend on it. I.e., if a caller can start a transaction, then they expect to also be able to roll back a transaction of several queries. Since Mongo doesn't natively have any concept of rolling back transactions, there are two possibilities:
You implement the possibility of rolling back queries in code, emulating the functionality of transactions for a database which doesn't natively support it. (Whether that's even reliably possible in Mongo is a debatable topic.)
Your interface is working at the wrong level of abstraction. If your interface is promising functionality an implementation can't deliver, then either the interface or the implementation is unrealistic.
In practice, Mongo and SQL databases are such different beasts that you would either never make this kind of change without changing large parts of your business logic around it; or you specify your interface using an extremely minimal common-denominator interface only, most certainly not exposing technology-specific concepts on an abstract interface.
You are mostly correct: interfaces can be very useful, but they can also be problematic in (fast-)changing code. A best practice concerning interfaces is to keep them as small as possible.
When something can handle a transaction, create an interface only for handling a transaction. Split interfaces up into the smallest logically sensible parts; that way, when new classes emerge, you can give them exactly the interfaces whose methods they can support. A rough sketch of such a split is shown below.
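A minimal Scala sketch of that idea (the trait and method names are invented for illustration): the transactional capability lives in its own small interface, so a store that cannot honour it simply never claims it.

```scala
// A deliberately small, lowest-common-denominator storage interface.
trait KeyValueStore {
  def put(key: String, value: String): Unit
  def get(key: String): Option[String]
}

// The transactional capability is a separate, equally small interface.
trait Transactional {
  def startTransaction(): Unit
  def commit(): Unit
  def rollback(): Unit
}

// A SQL-backed store can promise both capabilities...
class PostgresStore extends KeyValueStore with Transactional {
  def put(key: String, value: String): Unit = ???
  def get(key: String): Option[String] = ???
  def startTransaction(): Unit = ???
  def commit(): Unit = ???
  def rollback(): Unit = ???
}

// ...while a store without transactions only claims what it can deliver.
class MongoStore extends KeyValueStore {
  def put(key: String, value: String): Unit = ???
  def get(key: String): Option[String] = ???
}

// Code that genuinely needs transactions asks for them explicitly.
def transferAtomically(store: KeyValueStore with Transactional): Unit = {
  store.startTransaction()
  // ... puts and gets ...
  store.commit()
}
```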
For the multiple-parameter problem: see whether the extra value can be moved to a constructor, or whether it indicates that the action you are performing is in fact slightly different from the action that does not need this parameter.
I hope this helps, good luck.
You are right that interfaces are used to add new implementations via contract, but those implementations have to possess some similarity.
Let's take an example:
You cannot implement Dog against a Human interface just because a dog is also a living organism.
That is essentially what you are trying to do here: you are trying to implement a non-SQL DB behind a SQL DB interface.

ADO.Net - Performance difference between Execute Reader and Execute Scalar

I know the purpose of ExecuteReader and ExecuteScalar. But ExecuteReader can serve the purpose of ExecuteScalar, so why use ExecuteScalar? Is there any performance difference between them?
Which is faster?
Thanks.
The difference depends on the IDbCommand implementation; often performance is the same because ExecuteScalar internally executes the same code as ExecuteReader. A good example is SqlCommand: both methods call the internal RunExecuteReader method, so there is no difference in performance.
Many popular IDbCommand implementations work in the same manner as SqlClient (MySqlConnector, Npgsql, Microsoft.Data.Sqlite), but it is possible for an ADO.NET connector to offer better performance for ExecuteScalar.
In short, if you call a concrete class (say, SqlCommand) you can use either ExecuteReader or ExecuteScalar. If you use the IDbCommand interface (say, in a reusable library) and know nothing about the implementation, using ExecuteScalar may give some performance benefit with a connector that has a specially optimized implementation.

Functional programming equivalents for the following [closed]

I am trying to make the leap from writing "hello world"-equivalent functional programs to more real-world applications.
As I come from the Java world and have been exposed to all its design patterns, my modeling process is still very Java-oriented (e.g. I think in terms of *Managers, *Factory, *ClientFactory, *Handler, etc.).
Changing my thought process in one shot will be hard, so I was hoping to get some pointers on how the following scenarios (described in an OO way) would be modeled in a functional language.
Examples in a functional language like Clojure/Haskell (or perhaps a hybrid like Scala) would be helpful.
Stateless Request handlers
An example is a Servlet: it is essentially a request handler with methods like doGet and doPost. How would one model such a class in a functional language?
Orchestrator classes
Such classes don't do anything by themselves, but just orchestrate the whole process or workflow. They offer multiple entry point APIs.
E.g. an OrderOrchestrator orchestrates a multi-step workflow starting with payment instrument validation, shopping cart management, payment, shipment initiation, etc.
They might maintain some internal state of their own that is used by the different steps like payment, shipment etc.
ClientFactory pattern
Let's say you have written a client for a LogService that your customers use to log traffic data about their services. The client logs the data in S3, under buckets and accounts managed by you, and you provide additional services like reporting and analytics on this data.
You don't want your customers to worry about providing configuration information like AWS account info, so you provide a ClientFactory that instantiates the appropriate client object based on whether it is for testing or production purposes, without requiring the customer to provide any configuration. E.g. LogServiceClientFactory.getProdInstance() or LogServiceClientFactory.getTestInstance().
How is such a client modeled in a functional language?
Builder Pattern and other Fluent API designs
Client libraries often provide Builders to create objects with complex configuration. Sometimes APIs are also fluent to make them easy to use. An example of a fluent API is Mockito: Mockito.when(A.get()).thenReturn(a). IIRC this is internally implemented by returning progressively more restrictive Builders to allow the developer to write this code.
Is there a parallel to this in the functional programming world?
Datastore instances
Let's say that your codebase uses data stored in an ActiveUserRegistry from multiple places. You want only one instance of this registry to exist and have the entire codebase access this registry. So you provide an ActiveUserRegistry.getInstance() that guarantees that all the code accesses that one instance. (Assume that the instance is thread-safe, etc.)
How is this managed in a functional setting? Do we have to make sure the same instance is passed around in the entire codebase?
Below is something to get started:
Stateless Request handlers
Clojure: Protocols
Haskell: Type classes
Orchestrator classes
State monad (see the sketch after this list)
ClientFactory pattern
LogServiceClientFactory is a module, with getProdInstance and getTestInstance being functions in that module.
Builder Pattern and other Fluent API designs
Function composition
Datastore instances
Clojure: Function that uses an atom (to store and use the single instance)
Haskell: TVar, MVar
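To illustrate the State monad suggestion for orchestrator-style workflows, here is a small, self-contained Scala sketch: the State type is hand-rolled (in practice you would likely use a library such as Cats), and the order-related names are made up.

```scala
// A minimal State monad: a computation that threads a state S and yields an A.
final case class State[S, A](run: S => (A, S)) {
  def map[B](f: A => B): State[S, B] =
    State { s => val (a, s1) = run(s); (f(a), s1) }
  def flatMap[B](f: A => State[S, B]): State[S, B] =
    State { s => val (a, s1) = run(s); f(a).run(s1) }
}

// Hypothetical order state threaded through the workflow steps.
final case class OrderState(paymentValidated: Boolean = false,
                            paid: Boolean = false,
                            shipped: Boolean = false)

def validatePayment: State[OrderState, Unit] = State(s => ((), s.copy(paymentValidated = true)))
def charge: State[OrderState, Unit]          = State(s => ((), s.copy(paid = true)))
def ship: State[OrderState, Unit]            = State(s => ((), s.copy(shipped = true)))

// The "orchestrator" is just the composition of the steps; no mutable object.
val placeOrder: State[OrderState, Unit] =
  for {
    _ <- validatePayment
    _ <- charge
    _ <- ship
  } yield ()

// Run the whole workflow against an initial state.
val (_, finalState) = placeOrder.run(OrderState())
```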
I'm not very familiar with many of these Java-style structures, but I'll take a stab at answering:
Stateless Request handlers
These exist in the functional world as well. Functions can fill this role easily, even with something as simple as a function from requests to responses. The Play Framework uses something more powerful, specifically a function from the Request to an Iteratee (type (RequestHeader) ⇒ Iteratee[Array[Byte], SimpleResult]). The Iteratee is an entity that can progressively consume input (Array[Byte]) as it is received and eventually produce the response (SimpleResult) to give back to the client. The request handler function is stateless and can be reused. The Iteratee is also stateless - the result of feeding it each chunk is actually to get a new Iteratee back, which is then fed the next chunk. (I'm oversimplifying really, it uses Futures, is entirely non-blocking, and has effective error handling - worth looking at to get a feel of the power and simplicity that functional-style code can bring to this problem).
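At its simplest, a stateless handler is just a value of a function type. A tiny Scala sketch with invented Request/Response types (this is not the Play API):

```scala
// Minimal request/response types, purely for illustration.
final case class Request(method: String, path: String, body: String = "")
final case class Response(status: Int, body: String)

// A "servlet" becomes a plain function from Request to Response.
type Handler = Request => Response

val hello: Handler = {
  case Request("GET", "/hello", _)     => Response(200, "Hello, world")
  case Request("POST", "/hello", body) => Response(201, s"Created: $body")
  case _                               => Response(404, "Not found")
}

// Handlers compose like ordinary functions, e.g. wrapping one with logging.
def withLogging(h: Handler): Handler = { req =>
  println(s"${req.method} ${req.path}")
  h(req)
}

val app: Handler = withLogging(hello)
```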
Orchestrator classes
I'm not familiar with this pattern, so forgive me if this makes no sense. Having one giant mutable object that gets passed around is an anti-pattern. In functional code, there would be separate datatypes to represent the data that needs to be passed between each stage of the process. These datatypes would be immutable.
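A rough Scala sketch of that idea (all types invented for illustration): each stage is a pure function over an immutable value, and the "orchestrator" is just their composition.

```scala
// Immutable data passed between the stages.
final case class Order(items: List[String],
                       paymentValidated: Boolean = false,
                       shipmentId: Option[String] = None)

// Each workflow step is a pure function Order => Order.
val validatePaymentInstrument: Order => Order =
  order => order.copy(paymentValidated = true)

val initiateShipment: Order => Order =
  order => order.copy(shipmentId = Some("ship-123")) // id is a placeholder

// The orchestrator is just the composed pipeline; it keeps no state of its own.
val processOrder: Order => Order =
  validatePaymentInstrument andThen initiateShipment

val processed = processOrder(Order(items = List("book")))
```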
As for things that organize other things, look at Akka and how one actor can monitor other actors underneath it, handling errors or restarting them as needed.
Builder Pattern and other Fluent API designs
Functional programming has these and takes them to their logical conclusion. Functional code allows for very powerful DSLs. For an example, check out a parser combinator library, either the one in the Scala standard library or one of the libraries for Haskell.
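One common functional counterpart to the builder pattern is an immutable configuration value whose "with" methods return updated copies, so calls chain fluently without mutation. A small Scala sketch with invented names:

```scala
// An immutable "builder": each wither returns a new copy, so calls chain fluently.
final case class ClientConfig(endpoint: String = "http://localhost",
                              timeoutMs: Int = 1000,
                              retries: Int = 0) {
  def withEndpoint(e: String): ClientConfig = copy(endpoint = e)
  def withTimeoutMs(t: Int): ClientConfig   = copy(timeoutMs = t)
  def withRetries(r: Int): ClientConfig     = copy(retries = r)
}

// Fluent usage, no mutation anywhere.
val config = ClientConfig()
  .withEndpoint("https://logs.example.com")
  .withTimeoutMs(5000)
  .withRetries(3)
```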
ClientFactory pattern and Datastore instances
I don't think this is any different in functional code. Either you have a singleton, or you do proper dependency injection. The Factory pattern is used in functional code as well, though first-class functions make many design patterns too trivial to be worth naming (from the GoF: Factory, Factory method, Command, and at least some instances of Strategy and Template can usually just be functions).
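As a sketch of the factory-as-module idea from the earlier answer (all names hypothetical), the factory collapses to a couple of functions on an object, a shared instance can be a lazy val, and callers can depend on a factory function for easy testing:

```scala
// Made-up client type; in real code this would wrap the S3/AWS configuration.
final case class LogServiceClient(bucket: String, awsAccount: String)

// The "ClientFactory" is just a module (a Scala object) exposing functions.
object LogServiceClients {
  def prodInstance: LogServiceClient =
    LogServiceClient(bucket = "prod-logs", awsAccount = "prod-account")

  def testInstance: LogServiceClient =
    LogServiceClient(bucket = "test-logs", awsAccount = "test-account")

  // A single shared instance, created on first use (a cheap stand-in for a singleton).
  lazy val default: LogServiceClient = prodInstance
}

// Callers depend on a factory function, which makes swapping in a test client trivial.
def runReport(newClient: () => LogServiceClient): Unit = {
  val client = newClient()
  println(s"reporting against ${client.bucket}")
}

runReport(() => LogServiceClients.testInstance)
```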
Have a look at Functional Programming Patterns in Scala and Clojure: http://pragprog.com/book/mbfpp/functional-programming-patterns-in-scala-and-clojure.
It should have exactly what you need.

Is it possible to use GWT EntityProxy WITH RPC calls?

I was reading about the EntityProxy feature in GWT 2.1+ and was wondering whether you can use this proxy mechanism, combined with regular RPC calls, to avoid having to create DTOs.
I have a command pattern that uses the RPC mechanism, but as everybody knows, most of the time you have to round-trip complex objects, and you usually end up coding a DTO that is a copy of your server-side persistent object.
So can EntityProxy help you in this matter?
Thanks
EntityProxy is part of the RequestFactory system and cannot be used with GWT-RPC. The purpose of EntityProxy (and ValueProxy) is to avoid the need to code an entire DTO and all of the glue code that entails. The Request objects used by RequestFactory roughly approximate a command pattern, since multiple Request objects can be queued within a single RequestContext and evaluated with a single round-trip to the server.