The alive(port) method in RemoteActor does not take an IP address as a parameter.
Internally it constructs a TcpService object, which assigns the IP address by calling Java's InetAddress.getLocalHost().getHostAddress(), which returns the IP of the first available interface.
This causes problems on machines with multiple network interfaces, as it might return the wrong IP address.
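For example, on a multi-homed machine you can see the mismatch by comparing what getLocalHost() picks with the full set of interface addresses. A quick diagnostic sketch (plain JDK and standard-library calls, nothing from the actor library):

import java.net.{InetAddress, NetworkInterface}
import scala.collection.JavaConverters._

// The address TcpService ends up using:
println(InetAddress.getLocalHost.getHostAddress)

// All addresses actually available on this machine:
for {
  nic  <- NetworkInterface.getNetworkInterfaces.asScala
  addr <- nic.getInetAddresses.asScala
} println(nic.getName + ": " + addr.getHostAddress)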
Is there any way to overcome this issue?
Thanks.
Good question. It depends on how much you want to invest in a solution. I can imagine two ways:
1) The first option is to change the default implementation by writing something better yourself. That's not as hard as it sounds, since all the code for the remote actor library is available on GitHub.
My suggestion would be to re-implement parts of the TcpService class, especially line 73, changing it to something like:
private val internalNode = {
  val interfaces = NetworkInterface.getNetworkInterfaces()
  val interface = ...  // find the right interface here
  val addresses = interface.getInetAddresses()
  val address = ...    // find the right address here
  new Node(address, port)
}
This method also allows you to customize other stuff if you'd like to add or change something else.
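For illustration, here is a hedged sketch of how those two gaps might be filled in. The interface name "eth1" and the IPv4-only filter are example policies, not anything the library prescribes, and this assumes Node is the scala.actors.remote Node that takes a hostname string and a port:

import java.net.{Inet4Address, NetworkInterface}
import scala.collection.JavaConverters._

private val internalNode = {
  // Example policy: first IPv4 address on the interface named "eth1".
  val interface = NetworkInterface.getNetworkInterfaces.asScala
    .find(_.getName == "eth1")
    .getOrElse(sys.error("interface eth1 not found"))
  val address = interface.getInetAddresses.asScala
    .collectFirst { case a: Inet4Address => a }
    .getOrElse(sys.error("no IPv4 address on eth1"))
  new Node(address.getHostAddress, port)
}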
2) The other (and probably simpler) method would be to avoid using the default implementation altogether and instead use the very popular actor framework Akka. Akka provides a great deal of additional features, as well as efficiency and robustness. If you look at their GitHub repository and the Server class, you'll see that the host is actually read from a global config entry "hostname".
A detailed guide on how to manipulate the configs is given here. You should be able to use code similar to the above to find the right interface and address.
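If you go the Akka route, a sketch of overriding that host programmatically with Typesafe Config might look like the following. The config path and the address are placeholders; the exact key differs between Akka versions, so check your version's reference.conf:

import com.typesafe.config.ConfigFactory
import akka.actor.ActorSystem

// Override the remoting hostname before the ActorSystem is created.
// "akka.remote.netty.tcp.hostname" is illustrative; use the key from your Akka version.
val overrides = ConfigFactory.parseString(
  "akka.remote.netty.tcp.hostname = \"192.168.1.42\"")
val config = overrides.withFallback(ConfigFactory.load())
val system = ActorSystem("mySystem", config)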
Hope that helps!
I'm a newbie to Scala, and I have years of experience programming in Java.
Usually there are two patterns for passing some config (both sketched just below):
1) Using a global object, something like a "ConfigManager": every time I need a config value, I get it directly from that object.
2) Passing the config through parameters. The config parameter may exist in many layers of the program.
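To make the two patterns concrete, here is a rough Scala sketch; Typesafe Config and all the names are just for illustration:

import com.typesafe.config.{Config, ConfigFactory}

// Pattern 1 (illustrative): a global "ConfigManager"-style object, read from anywhere.
object ConfigManager {
  private val config: Config = ConfigFactory.load()
  def dbUrl: String = config.getString("db.url")
}

// Pattern 2 (illustrative): each component receives only the values it needs.
class Repository(dbUrl: String)

object Main extends App {
  val config = ConfigFactory.load()
  val repo = new Repository(config.getString("db.url"))
}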
When I'm writing Java, I choose between the two depending on how the config will be used.
But in Scala, many people talk about eliminating side effects. This makes me wonder if I should use the second pattern at all costs.
Which pattern is better in Scala?
Global objects are bad: https://softwareengineering.stackexchange.com/questions/148108/why-is-global-state-so-evil
Make each component take its configuration (the individual pieces it needs) as constructor parameters (possibly with some defaults). That prevents the creation of invalid components or components that have not been configured.
You can collect the initial processing of configuration values in a single class to centralize configuration code and to fail fast when things are missing. But don't make your components (the classes needing the configuration) depend on a global object or take an entire configuration as a parameter. Pass just what they need as constructor params.
Example:
import com.typesafe.config.{Config, ConfigFactory}

// centralize the parsing of configuration
case class AppConfig(config: Config) {
  val timeInterval = config.getInt("time_interval")
  val someOtherSetting = config.getString("some_other_setting")
}

...

// don't depend on global objects
class SomeComponent(timeInterval: Int) {
  ...
}

object SomeApplication extends App {
  val config = AppConfig(ConfigFactory.load())
  val component = new SomeComponent(config.timeInterval)
}
Use a global object (this object stores only read-only immutable data, so there are no issues) which loads the configuration object and the config variables at once. This has many benefits over loading the configuration deep inside the code.
import com.typesafe.config.ConfigFactory

object ConfigParams {
  val config = ConfigFactory.load()
  val timeInterval = config.getInt("time_interval")
  ....
}
Benefits:
Prevents runtime errors (fail-fast approach).
If you have misspelt any property name, your app fails during startup because you fetch the data eagerly. If the lookup were deep inside the codebase, it would be hard to notice: it only fails when control reaches that line, so it cannot be easily detected unless rigorous testing is done.
Central place for all configuration logic and any configuration transformations.
This serves as a central place for all config logic, which is easy to change and maintain.
Transformations can be done without refactoring the rest of the code.
Maintainable and readable.
Easy refactoring.
Functional programming point of view
Yes, loading the config file eagerly is a great idea from a fail-fast point of view, but it's not a good functional programming practice.
The important thing, though, is that you are not mixing the side effect with any other logic; you keep it separate during the loading of the app. Since you isolate the side effect and perform it at the start of your program, this is not a problem.
Once the side effect is done and the app has started, your pure code base is not affected by it and remains pure and clean. So, even though it is a side effect, it is isolated and does not affect your codebase. The benefits you gain from this are worth it, so go ahead.
Let's assume we have a function that returns a list of apples in our warehouse:
List<Apple> getApples();
After the application has been live for a while, we've found a bug: in rare cases, clients of this function get food poisoning because some of the apples returned are not ripe yet.
However, another set of clients does not care about ripeness at all; they use this function simply to know about all available apples.
The naive way of solving this problem would be to add a 'ripeness' member to an apple, then find all places where ripeness can cause problems and add some checks:
const auto apples = getApples();
for (const auto& apple : apples)
    if (apple.isRipe())
        consume(apple);
However, if we correlate this new requirement of having ripe apples with the way class interfaces are usually designed, we might find that we need a new interface which is a subset of a more generic one:
List<Apple> getRipeApples();
which basically extends the getApples() interface by filtering out the apples that are not ripe.
So the questions are:
Is this the correct way of thinking?
Should the old interface (getApples) remain unchanged?
How will it handle scaling if later on we figure out that some customers are allergic to red/green/yellow apples (getRipeNonRedApples)?
Are there any other alternative ways of modifying the API?
One constraint, though: how do we minimize the probability of an inexperienced/inattentive developer calling getApples instead of getRipeApples? Subclass Apple with a RipeApple? Downcast in getRipeApples?
A pattern often found among Java people is the idea of versioned capabilities.
You have something like:
interface Capability ...
interface AppleDealer {
    List<Apple> getApples();
}
and in order to retrieve an AppleDealer, there is some central service like
public <T> T getCapability (Class<T> type);
So your client code would be doing:
AppleDealer dealer = service.getCapability(AppleDealer.class);
When the need for another method comes up, you go:
interface AppleDealerV2 extends AppleDealer { ...
And clients that want V2 just do a getCapability(AppleDealerV2.class) call. Those that don't care don't have to modify their code!
Please note: of course, this only works for extending interfaces. You can't use this approach either to change signatures or to remove methods from existing interfaces.
Regarding your questions 3/4: I go with MaxZoom there, but to be precise: I would very much recommend for "flags" to be something like List<String>, or List<Integer> (for 'real' int-like flags), or even Map<String, Object>. In other words: if you really don't know what kind of conditions might come up over time, go for interfaces that work for everything, like one where you can give a map with "keys" and "expected values" for the different keys. If you go for pure enums there, you quickly run into similar "versioning" issues.
Alternatively, consider allowing your clients to do the filtering themselves; with Java 8 you can think of Predicates, lambdas and all that stuff.
Example:
Predicate<Apple> applePredicate = new Predicate<Apple>() {
    @Override
    public boolean test(Apple a) {
        return a.getColour() == AppleColor.GoldenPoisonFrogGolden;
    }
};
List<Apple> myApples = dealer.getApples(applePredicate);
IMHO creating a new class/method for every possible Apple combination will result in code pollution. The situation described in your post could be gracefully handled by introducing a flags parameter:
List<Apple> getApples(); // keep for backward compatibility
List<Apple> getApples(FLAGS); // use flag as a filter
Possible flags:
RED_FLAG
GREEN_FLAG
RIPE_FLAG
SWEET_FLAG
So a call like below could be possible:
List<Apple> getApples(RIPE_FLAG | RED_FLAG | SWEET_FLAG); // bit flags are combined with OR
that will produce a list of apples that are ripe, and red-delicious.
The Typesafe Config documentation and library examples make the point that type safety can be achieved by wrapping the configuration in an object, or nested objects, whose getter methods map to the Config.getType(key) methods.
If I wrap config calls in something like this:
class MyConfig(cfg: Config) {
  val language = cfg.getString("app.language")
  val database = new {
    val url = cfg.getString("db.url")
    val port = cfg.getInt("db.port")
    ...
  }
}
I can do decent-looking calls like config.database.url. Neat. (That dot looks so much better than an underscore.)
What I don't quite get is how to allow modifying properties and saving them: quoting the documentation, a config is immutable. My attempts so far turned into either gross spaghetti (closures with a var config) or horrendous boilerplate (modifying a plain object and creating a new config from it to save), so I turned here for help.
I'd appreciate it if someone showed me a good pattern for programmatically modifiable configuration using Typesafe Config.
It is possible that Typesafe Config just isn't the right tool for the job. I have little use for its powerful merging and inheritance capabilities; instead I mostly need a simple, concise, unicode-friendly and type-safe way to load and store properties. I already have one: a reflection-based Java lib working with annotated POJOs. There doesn't seem to be a lot of variety among configuration libraries in Scala. I may have been too eager to throw away my trusty Java tools.
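For reference, the most direct route I can see with the immutable API still amounts to building a whole new Config and rendering it back out. A minimal sketch (the key is the one from my example above; the output path and save step are made up):

import java.nio.file.{Files, Paths}
import com.typesafe.config.{ConfigFactory, ConfigRenderOptions, ConfigValueFactory}

// "Modify" by deriving a new Config, then render it and write it back to disk.
val config = ConfigFactory.load()
val updated = config.withValue("app.language", ConfigValueFactory.fromAnyRef("en"))
val rendered = updated.root().render(ConfigRenderOptions.concise().setFormatted(true))
Files.write(Paths.get("application.conf"), rendered.getBytes("UTF-8"))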
I need to disable the Nagle algorithm in Python 2.6.
I found out that patching HTTPConnection in httplib.py this way
def connect(self):
    """Connect to the host and port specified in __init__."""
    self.sock = socket.create_connection((self.host, self.port),
                                         self.timeout)
    self.sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, True)  # added line
does the trick.
Obviously, I would like to avoid patching a system lib if possible. So, the question is: what is the right way to do such a thing? (I'm pretty new to Python and could easily be missing some obvious solution here.)
Please note that if using the socket library directly, the following is sufficient:
self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, True)
I append this information to the accepted answer because it satisfies the information need that brought me here.
It's not possible to change the socket options that httplib specifies, and it's not possible to pass in your own socket object either. In my opinion this sort of lack of flexibility is the biggest weakness of most of the Python HTTP libraries. For example, prior to Python 2.6 it wasn't even possible to specify a timeout for the connection (except by using socket.setdefaulttimeout() globally, which wasn't very clean).
If you don't mind external dependencies, it looks like httplib2 already has TCP_NODELAY specified.
You could monkey-patch the library. Because Python is a dynamic language and more or less everything is done as a namespace lookup at runtime, you can simply replace the appropriate method on the relevant class:
import httplib

def patch_httplib():
    orig_connect = httplib.HTTPConnection.connect
    def my_connect(self):
        orig_connect(self)
        self.sock.setsockopt(...)
    # install the patched method so it actually takes effect
    httplib.HTTPConnection.connect = my_connect
However, this is extremely error-prone as it means that your code becomes quite specific to a particular Python version, as these library functions and classes do change. For example, in 2.7 there's a _tunnel() method called which uses the socket, so you'd want to hook in the middle of the connect() method - monkey-patching makes that extremely tricky.
In short, I don't think there's an easy answer, I'm afraid.
...and why does the package have this misleading name (I assumed it had something to do with Java ME or mobile/smart phones)?
I found no references on the internet about scala.mobile.Code or scala.mobile.Location at all, nor did I manage to do anything with those classes except getting ClassCastExceptions or NoSuchMethodErrors.
Actually, there is not even a single test against scala.mobile in Scala's test tree that could help in understanding that code.
The classes really smell like they were forgotten in the source tree a long time ago and have been accidentally released ever since.
Maybe I just missed something about them?
Update:
scala.mobile was removed in Scala 2.9.
I just checked the source code.
Scala changed the name mangling of class files a few years ago, and it seems people forgot to update these classes accordingly.
So my answer would be:
At least Location has no purpose, because it is not possible to get anything sensible out of it (except exceptions), and Code without Location is severely limited. It works, though, if you pass the class literal to Code directly:
import scala.mobile._
val c = new Code(classOf[scala.collection.mutable.StringBuilder])
c.apply[StringBuilder, String]("append")("Foo")
c.apply[String]("toString")() // returns "Foo"
c.apply[Int]("length")() // returns 3
Looks like yet another implementation of reflection-made-slightly-nicer in the standard library.
The description of Location pretty much explains what that is about:
The class Location provides a create method to instantiate objects
from a network location by specifying the URL address of the jar/class file.
It might be used by remote actors. Maybe.
As for why it has this misleading name? Well, back in 2004 smart phones had really low penetration, so maybe the association wasn't all that strong.