Positive and negative sides of YAML and PROPERTIES config formats - spring-cloud

I'm working on a Spring Cloud app and can't decide which way of storing config in the Consul KV store would be better: a whole YAML file or individual parameters. What are the positive and negative sides of each approach?

It might be easier to store the complete file in Consul for portability, in case you want to move from Consul to something else in the future.
AFAIK, other than that there are no real up- or downsides to either way.
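For context, Spring Cloud Consul can consume either style; which one the client expects is selected with the `spring.cloud.consul.config.format` property. A minimal sketch of the usual bootstrap configuration:

```yaml
# bootstrap.yml -- sketch; YAML stores the whole file under one Consul key,
# KEY_VALUE maps each property to its own key in the KV store
spring:
  cloud:
    consul:
      config:
        format: YAML   # or PROPERTIES, or KEY_VALUE for single parameters
```

Switching the format later mostly means re-importing the data into Consul in the other shape; the application code itself doesn't change.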

Related

Is there a good way to set dynamic labels for k8s resources?

I'm attempting to add some recommended labels to several k8s resources, and I can't see a good way to add labels for things that would change frequently, in this case "app.kubernetes.io/instance" and "app.kubernetes.io/version". Instance seems like a label that should change every time a resource is deployed, and version seems like it should change when a new version is released, by git release or similar. I know that I could write a script to generate these values and interpolate them, but that's a lot of overhead for what seems like a common task. I'm stuck using Kustomize, so I can't just use Helm and have whatever variables I want. Is there a more straightforward way to apply labels like these?
Kustomize's commonLabels transformer is a common way to handle this, sometimes via a component. It really depends on your overall layout.
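A minimal `kustomization.yaml` sketch (resource names and label values are placeholders). One caveat: `commonLabels` also injects the labels into selectors, which are immutable on Deployments, so for fast-changing values like the version the newer `labels` field with `includeSelectors: false` is often safer:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml        # hypothetical resource
labels:
  - pairs:
      app.kubernetes.io/instance: my-instance-42   # placeholder value
      app.kubernetes.io/version: "1.4.2"           # placeholder value
    includeSelectors: false  # keep mutable labels out of immutable selectors
```

The values can then be patched per overlay, or rewritten by CI with `kustomize edit` before applying.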

Distribute public key to all pods in all namespaces automatically

I have a public key that all my pods need to have.
My initial thought was to create a ConfigMap or Secret to hold it, but as far as I can tell neither of those can be used across namespaces. Apart from that, it's really boilerplate to paste the same volume into all my Deployments.
So now I'm left with only, in my opinion, bad alternatives, such as creating the same ConfigMap/Secret in every Namespace and doing the copy-paste thing in Deployments.
Any other alternatives?
Extra information after questions.
The key doesn't need to be kept secret, it's a public key, but it needs to be distributed in a trusted way.
It won't rotate often, but when it does, re-building all the images isn't an option.
Almost all images/pods need this key, and there will be hundreds of images/pods.
You can use Kubernetes initializers to intercept object creation and mutate the objects as you want. This solves the copy-paste across all your deployments, and you can manage it from a central location.
https://medium.com/google-cloud/how-kubernetes-initializers-work-22f6586e1589
You will still need to create configmaps/secrets per namespace though.
While I don't really like the idea, one way to solve this could be an init container that populates a volume with the key(s) you need; those volumes can then be mounted in your containers as you see fit. That way it becomes independent of how you namespace things and relies only on how pods are defined/created.
That said, Kubed, mentioned in another answer, sounds like a more reasonable approach to a case like this one. And last but not least, something creates your namespaces after all, so having the creation of a namespace's required elements inside that same process sounds legitimate as well.
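A sketch of the init-container idea (all names, images, and the key URL are placeholders; in practice the init container would fetch the key from whatever source you trust):

```yaml
# pod-sketch.yaml -- hypothetical names throughout
apiVersion: v1
kind: Pod
metadata:
  name: app-with-key
spec:
  volumes:
    - name: public-key
      emptyDir: {}           # scratch volume shared between containers
  initContainers:
    - name: fetch-key
      image: curlimages/curl  # any image that can fetch the key
      command: ["sh", "-c",
                "curl -fsS https://keys.example.com/app.pub -o /keys/app.pub"]
      volumeMounts:
        - name: public-key
          mountPath: /keys
  containers:
    - name: app
      image: my-app:latest    # placeholder
      volumeMounts:
        - name: public-key
          mountPath: /etc/keys
          readOnly: true
```

This avoids per-namespace ConfigMaps entirely, at the cost of a network fetch (and a trusted endpoint) on every pod start.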

Identify crossroad nodes in openstreetmap data (.pbf)

Does anybody know if there is a way I can separate out only the crossroad nodes included in a .pbf file? Is this clue (whether a node is a crossroad or not) included in this file format?
Another option to solve your issue would be to use the new Atlas project.
As part of loading .osm.pbf files into in-memory Atlas files, it takes care of doing way sectioning on roads:
Load your pbf file into an Atlas. You will then have an Atlas object that you could save to a file and re-use.
Use the Atlas APIs to access all the intersections.
In the end, each Atlas Node that is connected to more than 4 Edges (on two-way roads) or more than 2 Edges (on one-way roads) would be a candidate, if I understand your question correctly.
I'm not aware of a ready-made solution for this task, but it should still be relatively easy to do.
For parsing the .pbf file, I recommend using an existing library like Osmosis or Osmium. That way, you only need to implement the actual semantics of your use case.
The nodes themselves don't have any special attributes that mark them as crossroads. So instead, you will have to look at the ways containing the nodes.
Some considerations when implementing this:
You need to check the way's tags to find out whether it's a road. The most relevant key for that is highway. The details depend on your specific use case – for example, you need to decide whether footways, forestry tracks, driveways, ... should count.
What matters is the number of connecting way segments at a node, not the number of ways. For example, a node that is part of two ways may be a crossroads (if at least one of the ways continues beyond that node), or may not (if both ways start/end at that node).
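Those two considerations can be sketched in Python. The parsing itself is stubbed out with hard-coded ways here; in practice you would fill that structure from the .pbf file via Osmosis or Osmium:

```python
from collections import Counter

# Each way carries OSM-style tags and an ordered list of node ids.
# Hard-coded for illustration; a real run would read these from the .pbf.
ways = [
    {"tags": {"highway": "residential"}, "nodes": [1, 2, 3]},
    {"tags": {"highway": "residential"}, "nodes": [4, 2, 5]},
    {"tags": {"waterway": "river"},      "nodes": [6, 2, 7]},  # not a road
]

# highway values you consider "roads" -- adjust for your use case
ROAD_VALUES = {"motorway", "trunk", "primary", "secondary",
               "tertiary", "residential", "unclassified", "service"}

def crossroad_nodes(ways):
    """Return node ids where more than two road segments meet."""
    segments = Counter()
    for way in ways:
        if way["tags"].get("highway") not in ROAD_VALUES:
            continue
        nodes = way["nodes"]
        for i, node in enumerate(nodes):
            # An interior node joins two segments of the way; an endpoint
            # only one -- this is what distinguishes a through-way from
            # two ways that merely start/end at the same node.
            segments[node] += 1 if i in (0, len(nodes) - 1) else 2
    return {node for node, count in segments.items() if count > 2}

print(crossroad_nodes(ways))  # node 2 joins 4 road segments -> crossroad
```

The river way is skipped by the tag filter, so node 2 counts only the 4 road segments that meet there and is reported as a crossroads.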

Best practice to map OrientDB ORecordId onto a RESTful-friendly ID representation

We are looking into OrientDB as our persistence solution behind a RESTful web service, because a graph DB would be a perfect match for our use case. One of the things we have noticed is that entities (both vertices and edges) are uniquely identified by an ORecordId of the form '#${clusterId}:${clusterPosition}'. In a RESTful API, based on my personal experience with relational DBs, you typically have several options to identify entities uniquely, for example:
UUID's, generated in code and persisted on DB level
Long/Int values, generated on DB level incrementally
etc...
The problem is that the format "#${clusterId}:${clusterPosition}" is not really URL/REST friendly (example: .../api/user/[#${clusterId}:${clusterPosition}]/address). Do you have any advice/experience on how you would deal with this, keeping in mind that you need a bi-directional mapping between the ORecordId and the "RestFulFriendlyId"?
Any hints and best practices based on experience would be truly appreciated.
Best regards,
Bart
We're looking into using HashID. http://hashids.org/
There are some minor concerns we still have, but theoretically, HashID should get you a hashed RID that can also be converted back, so it won't take up more storage space (as a UUID would). It will just take a small bit of CPU time.
Please note, this little tool is not a true hash in the cryptographic sense, i.e. one that is very hard to reverse; it is more about good obfuscation. If you are at all worried about the RIDs being discovered, this isn't a proper solution.
Scott
Actually, I'd say the RIDs are very RESTful, if you do this:
.../domain.com/other-segments/{cluster}/{position}/...
Since clusters are a "superset" of a specific class (i.e. one class will have one or more clusters), this can be thought of as identifying the target data object by type/record. I'm not sure what backend you're using, but extracting those two URL segments and recombining them to #x:y should be a fairly simple (and maybe mostly automatic) task.
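That recombination really is simple; a minimal, framework-agnostic sketch in Python (the function names are illustrative):

```python
def rid_to_path(rid: str) -> str:
    """'#12:345' -> '12/345', usable as two URL segments."""
    cluster, position = rid.lstrip("#").split(":")
    return f"{cluster}/{position}"

def path_to_rid(cluster: str, position: str) -> str:
    """Two URL segments back to an OrientDB record id."""
    # int() rejects non-numeric junk before it reaches the database
    return f"#{int(cluster)}:{int(position)}"

print(rid_to_path("#12:345"))    # 12/345
print(path_to_rid("12", "345"))  # #12:345
```

In a web framework this would typically live in the route handler, with `{cluster}` and `{position}` captured as path parameters.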

Namespaces in Redis?

Is it possible to create namespaces in Redis?
From what I found, all the global commands (count, delete all) work on all the objects. Is there a way to create sub-spaces such that these commands will be limited in context?
I don't want to set up different Redis servers for this purpose.
I assume the answer is "No", and I wonder why this wasn't implemented, as it seems to be a useful feature without too much overhead.
A Redis server can handle multiple databases... which are numbered. It provides 16 of them by default; you can access them using the -n option to the redis-cli command-line tool, by similar options in a client library's connection arguments, or by using the "select()" method on its connection objects. (In this case .select() is the method name for the Python Redis module ... I presume it's named similarly for other libraries and interfaces.)
There's an option in the configuration file for the Redis server daemon to control how many separate databases you want as well. I don't know what the upper limit would be, and there doesn't seem to be a way to change it dynamically (in other words, it seems you'd have to shut down and restart the server to add additional DBs). Also, there doesn't seem to be a way to associate these DB numbers with any sort of name, nor to impose separate ACLs, nor even different passwords, on them. Redis, of course, is schema-less as well.
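For reference, that option is the `databases` directive in redis.conf (16 is the shipped default; the value below is just an example):

```conf
# redis.conf
databases 64   # default is 16; changing this requires a server restart
```

Clients then pick a database at connect time, e.g. `redis-cli -n 63` on the command line.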
If you are using Node, ioredis has transparent key prefixing, which works by having the client prepend a given string to each key in a command. It works in the same way that Ruby's redis-namespace does. This client-side approach still puts all your keys into the same database, but at least you add some structure, and you don't have to use multiple databases or servers.
var fooRedis = new Redis({ keyPrefix: 'foo:' });
fooRedis.set('bar', 'baz'); // Actually sends SET foo:bar baz
If you use Ruby you can look at these gems:
https://github.com/resque/redis-namespace
https://github.com/jodosha/redis-store