Are queries over Bolt visible in logs?

Are queries over Bolt visible in logs or other places and is it possible to secure them against unintended viewers who might intercept them?

Queries are visible in the logs when Memgraph is configured with the --log-level=DEBUG or --log-level=TRACE flag (the default log level is INFO, at which queries are not logged). The log files have to be secured at the OS level: unauthorized users shouldn't have access to the machine running Memgraph.
Memgraph also supports secure Bolt connections (SSL), so the communication between clients and the Memgraph server can be encrypted; the flags to achieve that are --bolt-key-file and --bolt-cert-file.
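As a sketch, a launch combining these flags might look like the following; the certificate and key paths are assumptions for illustration:

```shell
# Keep the default INFO level so queries stay out of the logs,
# and enable TLS for Bolt. The paths below are hypothetical.
memgraph --log-level=INFO \
         --bolt-cert-file=/etc/memgraph/ssl/cert.pem \
         --bolt-key-file=/etc/memgraph/ssl/key.pem
```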

Related

Is the PEP proxy service ready to be used to secure the Orion Context Broker?

If yes, I have the following questions:
After the PEP proxy service is started up, should the Context Broker also be restarted (which I cannot do)?
Should the IM and AM servers be started up separately?
If I use a CEP instance to send events to the Orion Context Broker, is there any way to specify that the Orion broker is secured? How do I create users for the PEP proxy server? Or is there any way for a CEP instance to bypass the authentication and authorisation in front of the Orion Context Broker?
Concerning 1: conceptually, PEP Proxies should be transparent to the components they are protecting, so you shouldn't have to make changes or restart your Context Broker.
Concerning 2: if by "started up separately" you mean that they are different processes, independent from the PEP proxy, and should be started up separately, then yes: they are independent of the use of a PEP proxy; it is the PEP that contacts both systems to do its job. If by "separately" you mean "on different machines", that's not really needed: you can have your own security machine with all the components, although that's not advisable.
Your third question will depend on which CEP you are going to use, as #fgalan pointed out. If the CEP supports the FIWARE authorization mechanisms, you can integrate it with the PEP-protected CB. If it does not, but your system doesn't require users to interact directly with the CEP, you can establish a secure connection between the Context Broker and the CEP independently (by using Security Groups or firewall rules), thus bypassing the PEP protection for your system's internal components (by using the secured internal ports instead of the public ones).
Hope this solves some of your doubts.
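For reference, a client that does go through the PEP proxy typically attaches an access token to each request. A minimal sketch, assuming a Wilma-style PEP listening on port 1027 and the X-Auth-Token header (host, port, path and token are all placeholders, not taken from the question):

```shell
# Hypothetical request to a PEP-protected Orion instance.
curl -X GET "http://pep-proxy.example.com:1027/v1/contextEntities" \
     -H "X-Auth-Token: $TOKEN" \
     -H "Accept: application/json"
```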

HAProxy continues to route sessions to a backend marked as down

I'm using HAProxy 1.5.0 in front of a 3-node MariaDB cluster.
HAProxy runs a custom check (a query exposed through an xinetd service) to verify that each DB node is in the Synced state.
When the check fails for some reason (for instance, the node gets desynced or becomes a donor), the corresponding backend in HAProxy is marked down, but I can still see active sessions on it in the HAProxy statistics console, and queries in the DB process list. This is possible because the MariaDB service is still up and accepting queries, even though the cluster status is not Synced.
I was wondering why HAProxy does not close the active connections when a backend goes down and dispatch them to the other active backends.
I do get this expected behaviour when the MariaDB service is fully stopped on a given node (no session possible).
Is there a specific option to enable this? "option redispatch" seemed promising, but it only applies when connections are closed (not my case), and it's already active in my config.
Thanks for your help.
Here are the settings we're using to get the same behavior:
default-server port 9200 [snip] on-marked-down shutdown-sessions
The key part is on-marked-down shutdown-sessions, which tells HAProxy to close all connections to a backend server when it is marked as down.
Of course, you can add it to every individual server if you're not using a default-server directive :)
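Put together, a backend using this setting might look like the following sketch; the server names, addresses and the check port are assumptions, not taken from the original config:

```
# Hypothetical haproxy.cfg fragment; addresses and names are placeholders.
backend mariadb_cluster
    mode tcp
    balance roundrobin
    # Health check against port 9200 (e.g. an xinetd clustercheck service);
    # kill existing sessions as soon as a server is marked down.
    default-server port 9200 on-marked-down shutdown-sessions
    server db1 10.0.0.1:3306 check
    server db2 10.0.0.2:3306 check
    server db3 10.0.0.3:3306 check
```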

How to make a RESTful service truly highly available with a hardware load balancer

When we have a cluster of machines behind a load balancer (LB), hardware load balancers generally keep persistent connections.
Now when we need to deploy an update to all machines (a rolling update), the way to do it is to bring one machine out of rotation and wait until no more requests are sent to that server via the LB. Once the app reaches that no-request state, we update it manually.
With 70-80 servers in the picture this becomes very painful.
Does someone have a better way of doing it?
70-80 servers is a very horizontally scaled implementation... good job! "Better" is a very relative term; hopefully one of these suggestions counts as "better".
Implement an intelligent health check for the application, with the ability to adjust the health check while the application is running. What we do is make the health check start failing while the application is still running just fine. This allows the load balancer to automatically take the system out of rotation. Our stop scripts query the load balancer to make sure the server is out of rotation, and then shut it down normally, which allows the existing connections to drain.
Batch multiple groups of systems together. I am assuming that you have 70 servers to handle peak load, which means that you should be able to restart several at a time. A standard way to do this is to implement a simple token-granting service with a maximum of 10 tokens. Have your shutdown scripts check out a token before continuing.
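The token-granting idea is essentially a counting semaphore. A minimal in-process sketch (a real deployment would expose this as a small network service; the limit of 10 comes from the suggestion above):

```python
import threading

class TokenPool:
    """Grant at most max_tokens concurrent restart tokens."""

    def __init__(self, max_tokens=10):
        self._sem = threading.Semaphore(max_tokens)

    def acquire(self, timeout=None):
        # Shutdown scripts call this and proceed only if it returns True.
        return self._sem.acquire(timeout=timeout)

    def release(self):
        # Called once the server is healthy and back in rotation.
        self._sem.release()
```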
Another way to do this is with blue/green deploys. That means that you have an entire second server farm and then once the second server farm is updated switch load balancing to point to the new server farm.
This is an alternative to option 3: install both versions of the app on the same servers and have an internal proxy service (like haproxy) switch connections between the deployed versions of the app. For example:
haproxy listening on 8080
app version 0.1 listening on 9001
app version 0.2 listening on 9002
Once you are happy with the deploy of app version 0.2, switch haproxy to send traffic to 9002. When you release version 0.3, switch load balancing back to 9001, and so on.
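That switch could look like the following haproxy.cfg sketch; the backend names are made up, and the cut-over is a one-line change plus a reload:

```
# Hypothetical fragment for the two-port scheme above.
frontend app_front
    bind *:8080
    default_backend app_v1   # change to app_v2 and reload to cut over

backend app_v1
    server local1 127.0.0.1:9001 check

backend app_v2
    server local2 127.0.0.1:9002 check
```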

Is it a good idea to set "leastconn" as the load balancing method in HAProxy to handle BOSH connections?

I have an HAProxy instance used as a load balancer for BOSH (http-bind, http://xmpp.org/extensions/xep-0206.html) servers. It was running with the "roundrobin" load balancing method, but I experienced some issues: when some instances go down, all their connections are redistributed to the active instances. When the dead nodes come up again, they don't have the same number of connections as the other instances, and they aren't using the same resources. If other instances then go down, the sessions are redistributed again, some servers get overloaded, and others that are running at their limits go down, so the whole service is interrupted, and I need to restart all instances at the same time so that the sessions can be evenly redistributed.
I was reading about how to configure BOSH load balancing with HAProxy and I found this book: "Professional XMPP Programming with JavaScript and jQuery". In it, the author recommends "leastconn" as the balance method for HAProxy.
The HAProxy documentation says that we shouldn't use "leastconn" with ordinary HTTP connections, but that we should use it where very long sessions are expected.
I think that this balancing method can help with the issue when servers go down, because it will redistribute the sessions equally among the active nodes, and when an instance comes up again, all new sessions will go to this instance until it has the same number of sessions as the other servers.
Does anyone have experience with this kind of configuration? What HAProxy settings or tuning do you recommend for balancing BOSH connections?
If your sessions are long, and since this is BOSH they may well be, then leastconn will provide better load balancing than roundrobin.
Roundrobin works well for very short connections.
cheers
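A minimal backend sketch with leastconn, assuming BOSH servers on port 5280 and a server timeout longer than the BOSH "wait" interval (all addresses, ports and timeouts are illustrative, not from the answer above):

```
# Hypothetical haproxy.cfg fragment; values are placeholders.
backend bosh_servers
    balance leastconn
    timeout server 70s          # must exceed the BOSH "wait" interval
    server xmpp1 10.0.0.1:5280 check
    server xmpp2 10.0.0.2:5280 check
```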

MSMQ redundancy

I'm looking into WCF/MSMQ.
Does anyone know how one handles redundancy with MSMQ? It is my understanding that the queue sits on the server, but what if the server goes down and is not recoverable? How does one prevent the messages from being lost?
Any good articles on this topic?
There is a good article on using MSMQ in the enterprise here.
Tip 8 is the one you should read.
"Using Microsoft's Windows Clustering tool, queues will failover from one machine to another if one of the queue server machines stops functioning normally. The failover process moves the queue and its contents from the failed machine to the backup machine. Microsoft's clustering works, but in my experience, it is difficult to configure correctly and malfunctions often. In addition, to run Microsoft's Cluster Server you must also run Windows Server Enterprise Edition—a costly operating system to license. Together, these problems warrant searching for a replacement.
One alternative to using Microsoft's Cluster Server is to use a third-party IP load-balancing solution, of which several are commercially available. These devices attach to your network like a standard network switch, and once configured, load balance IP sessions among the configured devices. To load-balance MSMQ, you simply need to setup a virtual IP address on the load-balancing device and configure it to load balance port 1801. To connect to an MSMQ queue, sending applications specify the virtual IP address hosted by the load-balancing device, which then distributes the load efficiently across the configured machines hosting the receiving applications. Not only does this increase the capacity of the messages you can process (by letting you just add more machines to the server farm) but it also protects you from downtime events caused by failed servers.
To use a hardware load balancer, you need to create identical queues on each of the servers configured to be used in load balancing, letting the load balancer connect the sending application to any one of the machines in the group. To add an additional layer of robustness, you can also configure all of the receiving applications to monitor the queues of all the other machines in the group, which helps prevent problems when one or more machines is unavailable. The cost for such queue-monitoring on remote machines is high (it's almost always more efficient to read messages from a local queue) but the additional level of availability may be worth the cost."
Not to be snide, but you kind of answered your own question. If the server is unrecoverable, then you can't recover the messages.
That being said, you might want to back up the message folder regularly. This TechNet article will tell you how to do it:
http://technet.microsoft.com/en-us/library/cc773213.aspx
Also, it will not back up express messages, so that is something you have to be aware of.
If you prefer, you might want to store the actual messages for processing in a database upon receipt, and have the service be the consumer in a producer/consumer pattern.
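That producer/consumer pattern could be sketched like this; the table and column names are made up for illustration, and a real deployment would use your production database rather than SQLite:

```python
import sqlite3

def init_db(conn):
    # Durable inbox table: messages land here on receipt.
    conn.execute("""CREATE TABLE IF NOT EXISTS inbox (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        body TEXT NOT NULL,
        processed INTEGER NOT NULL DEFAULT 0)""")

def enqueue(conn, body):
    # Producer side: persist the message before acknowledging receipt.
    conn.execute("INSERT INTO inbox (body) VALUES (?)", (body,))
    conn.commit()

def consume_one(conn):
    # Consumer side: claim and mark the oldest unprocessed message.
    row = conn.execute(
        "SELECT id, body FROM inbox WHERE processed = 0 ORDER BY id LIMIT 1"
    ).fetchone()
    if row is None:
        return None
    conn.execute("UPDATE inbox SET processed = 1 WHERE id = ?", (row[0],))
    conn.commit()
    return row[1]
```

Because the messages live in the database, they survive a crash of the consuming service, and the database's own backup/replication story covers redundancy.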