What's a unique, persistent alternative to a MAC address? (virtualization)

I need to be able to repeatably, non-randomly, uniquely identify a server host, which may be arbitrarily virtualized and over which I have no control.
A MAC address doesn't work because in some virtualized environments, network interfaces don't have hardware addresses.
Generating a state file and saving it to disk doesn't work because the virtual machine may be cloned, thus duplicating the file.
The server's SSH host keys may be a candidate. They can be cloned like a state file, but in practice they generally aren't, because duplicating host keys is such a well-known security problem that it's a mistake people rarely make.
There's also /var/lib/dbus/machine-id, but that's dependent on dbus. (Thanks Preetam).
There's a cpuid but that's apparently deprecated. (Thanks Bruno Aguirre on Twitter).
Hostname is worth considering. Many systems like Chef already require unique hostnames. (Thanks Alfie John)
I'd like the solution to persist for a long time, and certainly across server reboots and software restarts. Ultimately, I also know that users of my software will deprecate a host and want to replace it with another while keeping continuity of the associated data, so there are reasons a UUID might be considered mutable over the long term; but I don't particularly want a host to start considering itself unknown and re-register itself for no reason.
Are there any alternative persistent, unique identifiers for a host?

It really depends on what is meant by "persistent". For example, two VMs can't each open the same network socket to you, so even if they are bit-level clones of each other it is possible to tell them apart.
So, all that is required is sufficient information to tell the machines apart for whatever the duration of the persistence is.
If the duration of the persistence is the length of a network connection, then you don't need any identifiers at all -- the sockets themselves are unique.
If the persistence needs to be longer -- say, for the length of a boot -- then you can regenerate UUIDs whenever the system boots. (Note that a VM that is cloned would still have to reboot, unless you're hot-copying it.)
If it needs to be longer than that -- say, indefinitely -- then you can generate a UUID identifier on boot and save it to disk, but only use this as part of the identifying information of the machine. If the virtual machine is subsequently cloned, you will know this since you will have two machines reporting the same ID from different sources -- for instance, two different network sockets, different boot times, etc. Since you can tell them apart, you have enough information to differentiate the two cloned machines, which means you can take a subsequent action that forces further differentiation, like instructing each machine to regenerate its state file.
Ultimately, if a machine is perfectly cloned, then by definition you cannot tell which one was the "real one" to begin with, only that there are now two distinguishable machines.
Implying that you can tell the difference between the "real one" and the "cloned one" means that there is some state you can use to record the difference between the two, like the timestamp of when the virtual machine itself was created, in which case you can incorporate that into the state record.
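To make that last idea concrete, here is a minimal sketch in Python: generate a UUID on first boot, persist it, and always report it together with per-boot information so that two clones sharing the same stored ID can still be told apart. The state-file path and report fields are my own assumptions, and the boot-time trick is Linux-specific.

    # Sketch: persistent ID plus per-boot information, so cloned VMs that report
    # the same stored ID can still be told apart on the server side.
    # The state-file path and registration payload are illustrative assumptions.
    import json
    import os
    import time
    import uuid

    STATE_FILE = "/var/lib/myagent/machine-id"   # hypothetical location

    def stored_machine_id() -> str:
        """Return the UUID generated on first boot, creating it if needed."""
        try:
            with open(STATE_FILE) as f:
                return f.read().strip()
        except FileNotFoundError:
            machine_id = str(uuid.uuid4())
            os.makedirs(os.path.dirname(STATE_FILE), exist_ok=True)
            with open(STATE_FILE, "w") as f:
                f.write(machine_id)
            return machine_id

    def boot_time() -> float:
        """Boot time in seconds since the epoch (Linux: derived from /proc/uptime)."""
        with open("/proc/uptime") as f:
            uptime = float(f.read().split()[0])
        return time.time() - uptime

    def identity_report() -> dict:
        # The server should consider (machine_id, boot_time, source address) together;
        # two clones share machine_id but differ in the other fields.
        return {"machine_id": stored_machine_id(), "boot_time": boot_time()}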

It looks like simple solutions have been ruled out.
So that could lead to complex solutions, like this protocol:
- Client sends tuple [ MAC addr, SSH public host key, sequence number ]
- If server receives this tuple as expected, server and client both increment sequence number.
- Otherwise server must determine what happened (was client cloned? did client move?), perhaps reaching a tentative conclusion and alerting a human to verify it.
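A rough sketch of the server side of that protocol, with an in-memory dict standing in for real storage and print() standing in for alerting a human:

    # Sketch of the server-side check for the [MAC, host key, sequence] tuple
    # described above. Storage and alerting are stand-ins (in-memory dict, print).
    known_hosts = {}  # (mac, host_key) -> last sequence number seen

    def handle_report(mac: str, host_key: str, seq: int) -> str:
        key = (mac, host_key)
        if key not in known_hosts:
            known_hosts[key] = seq
            return "registered"
        expected = known_hosts[key] + 1
        if seq == expected:
            known_hosts[key] = seq          # both sides increment in lockstep
            return "ok"
        if seq <= known_hosts[key]:
            # A stale sequence number suggests a clone replaying old state.
            print(f"ALERT: possible clone of {mac} (seq {seq} already seen)")
            return "needs-review"
        # A gap suggests missed reports (or the client moved); flag for a human.
        print(f"ALERT: sequence gap for {mac}: expected {expected}, got {seq}")
        return "needs-review"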

I don't think there is a straightforward "use X" solution based on the info available, but here are some general suggestions that might get you to a better spot.
If cloning from a "gold image", consider using some "first boot" logic to generate a unique ID. Config management systems like Chef, Puppet or CFEngine provide some scaffolding to achieve this.
Consider a global state manager like ZooKeeper, specifically its atomic counter functionality. The same system could get a new ID over time, but it would still be unique.
This Stack Overflow question might also give you some other direction; it references Twitter's approach to a similar problem.
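For the ZooKeeper suggestion, here is a minimal sketch using the kazoo client and sequential znodes (which give the same effect as an atomic counter); the connection string and znode path are placeholders:

    # Sketch: using ZooKeeper to hand out cluster-wide unique IDs via sequential
    # znodes. Host list and path are assumptions, not anything standard.
    from kazoo.client import KazooClient

    zk = KazooClient(hosts="zk1:2181,zk2:2181,zk3:2181")
    zk.start()

    # Each create() with sequence=True returns a path whose suffix is a
    # monotonically increasing, cluster-wide unique number.
    path = zk.create("/myapp/host-ids/id-", b"", sequence=True, makepath=True)
    host_id = int(path.rsplit("-", 1)[1])
    print("assigned host id:", host_id)

    zk.stop()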

If I understand correctly, you want a durable, globally unique identifier under these conditions:
An OS installation that can be cloned while running, so any state inside the VM won't work, and
Could be running in an arbitrary virtualization environment, so any state outside the VM won't work.
I realize this doesn't directly answer your question, but it really seems like either the design or the constraints need some substantial adjustment to accommodate a solution.


Microservice adapter: one for many countries, or one per country? Architectural/deployment decision

Say I have System1, which connects to System2 through an adapter microservice between them:
System1 -> rest-calls --> Adapter (converts request-response + some extra logic, like validation) -> System2
System1 is more like a monolith; it exists for many countries (but that may change).
The question is: from the perspective of microservice architecture and deployment, should there be one adapter per country (say Adapter-UK, Adapter-AU, etc.), or a single adapter that handles many countries at the same time?
I mean:
To have a single system/adapter service:
Advantage: one code base in one place; the adapter logic is about 90% the same across countries. Easy to introduce new changes.
Disadvantage: once we deploy the system and there is a bug, it could affect many countries at the same time. Not safe.
To have a separate system per country:
Disadvantage: once some generic change is introduced in one system, it has to be "copy-pasted" to all the other countries/services. Repetitive, not-so-smart work from a developer's point of view.
Advantage: safer to change and deploy.
Q: What is the preferable way from the point of view of microservice architecture?
I would suggest the following:
- the adapter covers all countries, in order to maintain a single codebase and improve code reusability
- unit and/or integration tests to cope with the bugs
- multiple identical instances of the adapter behind a load balancer
Since Prabhat Mishra asked in the comments:
After two years... (it took some time to understand what I had asked.)
Back then it was quite critical for me to have a resilient system, i.e. if I changed code in one adapter I did not want all my countries to go down (it is an SAP enterprise system, millions of clients). I wanted only one country to go down (still millions of clients, but fewer millions :)).
So, for this case I would create many adapters, one per country, BUT I would use some code-generated common solution to create them, like a common jar, so I would not repeat my infrastructure or communication layers. A scaffolding thing:
country-adapter new "country-1"
then add some country-specific code (without changing any generated code: less code to repeat, less to support)
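To illustrate the shape of that generated-common-part-plus-thin-country-layer idea (the answer talks about a shared jar; this sketch uses Python purely for brevity, and all class and method names are hypothetical):

    # Illustration of the "generated common part + thin country layer" idea.
    class BaseCountryAdapter:
        """Generated/shared code: transport, logging, retries, generic mapping."""

        def handle(self, request: dict) -> dict:
            self.validate(request)
            payload = self.to_system2(request)
            # ... the shared communication layer would call System2 here ...
            return {"status": "sent", "payload": payload}

        # Hooks the per-country adapters may override:
        def validate(self, request: dict) -> None:
            pass

        def to_system2(self, request: dict) -> dict:
            return request

    class UkAdapter(BaseCountryAdapter):
        """Country-specific code only; nothing generated is modified."""

        def validate(self, request: dict) -> None:
            if "vat_number" not in request:
                raise ValueError("UK requests require a VAT number")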
Otherwise, if you feel safe (e.g. code reviewing your change, making sure you do not touch other countries' code), then one adapter is OK.
Another solution is to start with one adapter and split it into more if that becomes critical (i.e. you do not feel safe about the potential damage/cost if it fails).
In general, it seems this all boils down to the same problem: WHEN to split the "monolith" into pieces. The answer is always: when being as big as it is causes problems. But if you know your system well, you know in advance whether that WHEN is now (or not).

Is it correct to use TargetSubID as a flag for test data in FIX protocol?

We are currently working on a FIX connection where data that should only be validated can be marked as such. It has been decided to mark this data with a specific TargetSubID, but that implies a new session.
Let's say we send the messages to the session FIX.4.4:S->T. If we then get a message that should only be validated, with TargetSubID V, this implies the session FIX.4.4:S->T/V. If this session is not configured, we get the error
Unknown session: FIX.4.4:S->T/V
and if we explicitly configure this session next to the other, there is the error
quickfix.Session – [FIX/Session] Disconnecting: Encountered END_OF_STREAM
which, as bhageera says, means that you are logging in with the same credentials:
(...) the counterparty I was connecting to allows only 1 connection
per user/password (i.e. session with those credentials) at a time.
I'm not a FIX expert, but I'm wondering whether TargetSubID is simply being misused here. If it isn't, I would like to know how to do this properly. We develop the FIX client with camel-quickfix.
It depends a lot on what your system is like and what you want to achieve in the end.
Usually the dimensions to assess are:
- maximising flexibility
- minimising the amount of additional logic required to support the testing
- minimising the risk of bad things happening on an accidental connection from a test to a prod environment (it may happen, despite what you might think).
Speaking for myself, I would not use tags potentially involved in the session/routing behavior for testing, unless all I need is routing features and my system reliably behaves the way I expect (probably not your case).
Instead I would consider one of these:
- pick something from the user-defined tag range (5000-9999)
- use one of the symbology tags (say Symbol(55)) corrupted in some reversible way (say "TEST_VOD.L" in tag 55 instead of "VOD.L")
A tag from a custom range gives a lot of flexibility; a corrupted symbology tag makes sure a test order bounces if it is sent to prod by accident.
For either solution you may potentially need a tag-based routing and transformation layer. Both can be done in a couple of hours in generic form if you are using something Java-based (I'd look towards javax.scripting / Nashorn).
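As a rough illustration of that tag-based check (the custom tag number 5001 and the TEST_ prefix are arbitrary choices for this sketch, not anything standard):

    # Sketch: treat an order as test traffic if a custom tag (5001 here, purely an
    # example from the user-defined range) is set, or if Symbol(55) carries the
    # reversible "TEST_" prefix.
    SOH = "\x01"

    def parse_fix(raw: str) -> dict:
        """Split a raw FIX message into a tag -> value dict."""
        return dict(field.split("=", 1) for field in raw.strip(SOH).split(SOH) if field)

    def is_test_message(fields: dict) -> bool:
        if fields.get("5001") == "Y":                     # hypothetical custom test flag
            return True
        return fields.get("55", "").startswith("TEST_")   # corrupted symbology tag

    def real_symbol(fields: dict) -> str:
        """Reverse the corruption so the test order can be processed normally."""
        symbol = fields.get("55", "")
        return symbol[len("TEST_"):] if symbol.startswith("TEST_") else symbol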
It's up to the counterparties - sometimes Sender/TargetSubID are considered part of the unique connection, sometimes they distinguish messages on one connection.
Does your library have a configuration option to exclude the sub IDs from the connection lookups? e.g. in QuickFix you can set the SessionQualifier.

consistent hashing on Multiple machines

I've read the article http://n00tc0d3r.blogspot.com/ about the idea of consistent hashing, but I'm confused about how the method works across multiple machines.
The basic process is:
Insert
Hash an input long URL into a single integer;
Locate a server on the ring and store the key -> longUrl mapping on that server;
Compute the shortened URL using base conversion (from base 10 to base 62) and return it to the user. (How does this step work? On a single machine there is an auto-incremented ID to compute the shortened URL from, but what value do we use for the shortened URL on multiple machines? There is no auto-incremented ID.)
Retrieve
Convert the shortened URL back to the key using base conversion (from base 62 to base 10);
Locate the server containing that key and return the longUrl. (And how can we locate the server containing the key?)
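(As an aside, the base conversion in that step is plain integer encoding; the open question above is which integer to encode. A minimal base-62 encode/decode sketch, where the alphabet ordering is just a convention that encode and decode must share:)

    # Minimal base-62 encode/decode for the key <-> short-URL step.
    ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

    def encode62(n: int) -> str:
        if n == 0:
            return ALPHABET[0]
        digits = []
        while n:
            n, rem = divmod(n, 62)
            digits.append(ALPHABET[rem])
        return "".join(reversed(digits))

    def decode62(s: str) -> int:
        n = 0
        for ch in s:
            n = n * 62 + ALPHABET.index(ch)
        return n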
I don't see any clear answer on that page for how the author intended it. I think this is basically an exercise for the reader. Here are some ideas:
Implement it as described, with hash-table-style collision resolution. That is, when creating the URL, if it already matches something, deal with that in some way. Rehashing or an arithmetic transformation (e.g., add 1) are both possibilities. This means, naively, a theoretical worst case of having to hit a server n times trying to find an available key.
There are a lot of ways to take that basic idea and refine it, e.g., search for another available key on the same server by rehashing iteratively until you find one that lands on that server.
Allow servers to talk to each other and coordinate on the auto-increment ID.
This is probably not a great solution, but it might work well in some situations: give each server (or set of servers) a separate namespace, e.g., the first 16 bits select a server. On creation, randomly choose one. Then you just need to figure out how you want that namespace to map. The namespaces only really matter for who is allowed to create which IDs, so if you want to add nodes or rebalance later, it is no big deal.
Let me know if you want more elaboration. I think there are a lot of ways this one could go. It is annoying that the author didn't elaborate on this point; my experience with these sorts of algorithms is that collision resolution and similar problems tend to be at the very heart of a practical implementation of a distributed system.
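Here is a small sketch of the last idea, assuming 64-bit keys with the server ID in the top 16 bits and a purely local auto-increment below it (all names are illustrative):

    # Sketch: each server owns a namespace (its ID in the top 16 bits of a 64-bit
    # key) and hands out locally auto-incremented keys below that, so no
    # cross-server coordination is needed when creating a short URL.
    import itertools

    class ShortUrlNode:
        def __init__(self, server_id: int):
            assert 0 <= server_id < 2 ** 16
            self.server_id = server_id
            self.counter = itertools.count(1)   # local auto-increment
            self.store = {}                     # key -> long URL

        def create(self, long_url: str) -> int:
            key = (self.server_id << 48) | next(self.counter)
            self.store[key] = long_url
            return key                          # base-62 encode this for the short URL

    def server_for(key: int) -> int:
        """Retrieval: the owning server is recoverable from the key itself."""
        return key >> 48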

Why do we need Operational Transformation for real-time collaboration?

Having seen apps like Google Docs and libraries like ShareJS and EtherPad Lite, I am pretty excited about real-time collaboration, and this seems to be implemented using a very complex technique known as Operational Transformation.
My question is perhaps somewhat odd: why is OT necessary?
What I mean is, we have very low latency on the web in most settings - with tools like Google Docs, ShareJS and EtherPad, changes are almost instantly reflected on connected clients.
Why the incredibly complex solution of resolving conflicts and keeping things synchronized on the server-side?
Being familiar with the command pattern and undo/redo, it seems to me a much simpler solution would be to simply implement every change to a document as a command with an equivalent undo-command.
Let clients submit serialized commands when they make changes. Assign a serial number on the server-side to every received command. Distribute all commands applied to a document back to the clients, which also maintain a history of commands.
Each connected client receives back from the server all the commands applied to the document, now with serial numbers indicating the "correct" order, i.e. the order in which the commands were received by the server, and in which they were applied to the master document held by the server.
If a client was at command number 100 and submits a new command to the server that comes back as number 102, the client knows that it missed a command: it simply applies the "undo" command for the last command it submitted, applies command number 101, and then applies its own command number 102 again, thus putting things back in order.
If it's behind by several commands, it simply rolls back as far as needed, then applies all missed commands, etc.
That sounds much simpler to me.
In what way is Operational Transformation better than that?
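For concreteness, here is a minimal sketch of the bookkeeping described above, with commands modelled as (do, undo) callables; it deliberately ignores whether a re-applied command still makes sense at its new position, which is the conflict question being asked here.

    # Sketch of the command-log scheme from the question: the server assigns
    # sequence numbers and broadcasts every command back in order; the client
    # optimistically applies its own edit and rolls back/re-applies when the
    # server's numbering shows it applied the edit too early.
    class Doc:
        def __init__(self):
            self.history = []              # (seq, do, undo) in server order
            self.pending = None            # our own command awaiting a number

        def local_edit(self, do, undo):
            do()                           # optimistic local apply
            self.pending = (do, undo)      # then submit to the server

        def on_broadcast(self, seq, do, undo, is_own):
            if self.pending and not is_own:
                # A command arrived that the server ordered before ours:
                # undo ours, apply the missed one, re-apply ours on top.
                self.pending[1]()          # undo our optimistic edit
                do()
                self.history.append((seq, do, undo))
                self.pending[0]()          # redo ours; it will get a later seq
            else:
                if not is_own:
                    do()                   # our own edit is already applied locally
                self.history.append((seq, do, undo))
                if is_own:
                    self.pending = None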

syslog - log line classifications

A very generic question, from a programmer's point of view but with the operational aspects of the process (program) in mind.
Is there any sort of best practice/guide for classifying messages, particularly in the context of a SaaS/multi-tenant (server) software environment that generates errors and warnings due to user actions or misconfiguration? Due to the nature of the software, most modules I have to deal with are stateless; i.e. when an error happens due to user error, it is quite hard to distinguish it from an operational error (like a network misconfiguration, etc.).
What I want to know from you experienced folks is: what is the sensible logic to employ here, in order to make it easy for the operations people to classify these messages and identify problems?
Just three aspects from an admin and log analysis/classification perspective:
Make the tag field/program name configurable. Then one can configure multiple instances to use log tags like app/user_1, app/user_2 etc., allowing for fast and simple filters on the syslog level.
Structure your messages from left to right, so one can filter different categories of log lines with simple search patterns or regular expressions, e.g. config error - cannot parse line 123 or runtime warning - lost connection to DB xyz.
For very structured logs you might also take a look at the 'structured data' field in syslog-protocol. So far it is rarely used and without tool support, but it allows for application log messages with namespaces and very clear key-value-attributes.
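A small sketch of the first two points using Python's standard logging module (the tag value and the /dev/log socket path are assumptions; the latter is Linux-specific):

    # Sketch: a configurable tag (program name) per instance, and messages
    # structured left-to-right as "<category> - <detail>" so they filter with
    # simple patterns at the syslog level.
    import logging
    import logging.handlers

    def make_logger(tag: str) -> logging.Logger:
        logger = logging.getLogger(tag)
        handler = logging.handlers.SysLogHandler(address="/dev/log")
        # The leading "tag:" is parsed by syslog daemons as the program name.
        handler.setFormatter(logging.Formatter(f"{tag}: %(message)s"))
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
        return logger

    log = make_logger("app/user_1")
    log.warning("config error - cannot parse line 123")
    log.warning("runtime warning - lost connection to DB xyz")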
Identify the servers and server types (name, IP address, etc.).
Classify by severity, and make sure all the clocks are in sync so that messages are ordered correctly.
Include a message/error code so you can filter and create rules in your monitoring tool.
Include the module name (useful if several modules run on one server).
Include a category for addressing general services like networking, etc.
I guess you will gather the logs from the different machines via their syslog daemons onto a central machine in charge of supervision/monitoring.
Most *nix processes log to syslog (or at least should) using a semi-standard format "Month Day 24H-Time host process_name[pid]: message". Syslog incorporates ways to indicate the message's severity; use them (though keep in mind that the severity is from the system's perspective, not the application's).
If the message is for debugging, it's usually "Function_Name File_Name Line_No Error_Code Error_Desc"; otherwise the format of the message is entirely program-dependent.
For multi-tenant systems it's pretty common for the "message" part to start with some form of tenant identification, followed by the actual log message.
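A brief sketch of those conventions using Python's syslog module (the tenant/module prefix format and the severity mapping are just one possible choice):

    # Sketch: start the "message" part with a tenant identifier and module, and
    # map user/config mistakes vs. operational failures to different syslog
    # severities so the ops side can route on severity alone.
    import syslog

    syslog.openlog("myservice", syslog.LOG_PID, syslog.LOG_USER)

    def log_event(tenant: str, module: str, user_error: bool, text: str) -> None:
        severity = syslog.LOG_WARNING if user_error else syslog.LOG_ERR
        syslog.syslog(severity, f"tenant={tenant} module={module} {text}")

    log_event("acme", "billing", True, "invalid VAT number in upload row 17")
    log_event("acme", "billing", False, "lost connection to DB xyz")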