We are running client-side MOXI on the same machine as our Tomcat servers, with MOXI currently talking to a cluster of Membase servers on 3 different machines. The Java clients talk to MOXI via spymemcached over data port 11211.
We are going to migrate to Couchbase now, and from a development perspective we'd like to use Spring Data with Couchbase, but our infrastructure team wants to keep MOXI on the client machines and only communicate via port 11211. It seems that when configuring the Couchbase client this won't work, as MOXI doesn't proxy the admin port (8091) that the CouchbaseClient class uses to discover the Couchbase cluster. Does this mean that if we keep our current infrastructure, Spring Data is off the table?
I am new to this and have gone through the Couchbase documentation, and it seems like what I want to do isn't possible, but I would like to confirm this. Currently, to configure spring-data I am using this:
<couchbase:couchbase bucket="appsbucket" password="" host="localhost"/>
<couchbase:repositories base-package="com.pathto.myrepositories"/>
Localhost is where MOXI is running, but the assumption made by the couchbase bean (the CouchbaseClient configuration) is that the Couchbase admin port is available at port 8091. Of course, if instead of localhost I point it toward one of the servers hosting Couchbase, I don't have an issue, other than our infrastructure team taking umbrage at this configuration.
Once you move to Couchbase with a smart client there isn't really much value in moxi; in fact, you'll be introducing an additional network hop (client -> moxi; moxi -> cluster).
You can think of the smart clients as conceptually having an embedded moxi - as the smart clients are aware of the cluster topology and know which node to communicate with to access a given document.
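To illustrate, here is a minimal sketch of bootstrapping the classic Couchbase Java client (1.x) directly against the cluster, with no moxi in between; the node hostnames are placeholders, and 8091 is the REST/admin port the client uses for topology discovery:

    import com.couchbase.client.CouchbaseClient;
    import java.net.URI;
    import java.util.Arrays;
    import java.util.List;

    public class DirectConnectExample {
        public static void main(String[] args) throws Exception {
            // Hypothetical node names; the client only needs a seed list and
            // discovers the rest of the cluster topology over the REST port.
            List<URI> nodes = Arrays.asList(
                    URI.create("http://cb-node1:8091/pools"),
                    URI.create("http://cb-node2:8091/pools"));
            CouchbaseClient client = new CouchbaseClient(nodes, "appsbucket", "");
            client.set("greeting", 0, "hello").get(); // data ops go straight to the owning node
            client.shutdown();
        }
    }

The same idea applies to the spring-data XML above: pointing host at one of the cluster nodes instead of the local moxi should let the client bootstrap normally.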
I suggest you take a look at the Deployment strategies section in the Couchbase admin guide, which explains all this in more detail.
My company is interested in using a stand-alone Service Fabric cluster to manage communications with robots. In our scenario, each robot would host its own rosbridge server, and our Service Fabric application would maintain WebSocket clients to each robot. I envision a stateful service partitioned along device ids which opens connections on startup. It should monitor connection health via heartbeats, pass messages from the robots to some protocol gateway service, and listen to other services for messages to pass to the robots.
I have not seen discussion of this style of external communications in the Service Fabric documentation - I cannot tell if this is because:
There are no special considerations for managing WebSockets (or any two-way network protocol) this way from Service Fabric. I've seen no discussion of restrictions and see no reason, conceptually, why I can't do this. I originally thought replication would be problematic (duplicate messages?), but since only one replica can be primary at any time this appears to be a non-issue.
Service Fabric is not well-suited to bi-directional communication with external devices
I would appreciate some guidance on whether this architecture is feasible. If not, discussion on why it won't work will be helpful. General discussion of limitations around bi-directional communication between Service Fabric services and external devices is welcome. I would prefer if we could keep discussion to stand-alone clusters - we have no plans to use Azure services at this time.
Any particular reason you want SF to host the client and not the other way around?
Doing it the way you suggest, I think you will face big challenges making SF find these devices on your network and keep track of them (firewalls, IPs, NAT, planned maintenance, failures, connection issues), unless you are planning to do it by hand.
From the brief description I saw in the docs you provided about the rosbridge server, my understanding is that you host it on a server (like you would a Service Fabric service) and your devices connect to it; in this case, your devices would have ROS installed to make this communication work.
Regarding your concerns about the communication: Service Fabric services are just executable programs you would normally run on your local machine; if it works there, it will likely work in an on-premises Service Fabric environment. The only extra care you have to take is around external access to the cluster (in Azure, or your network configuration) and service discovery.
From my point of view, you should use SF as the central point of communication, and each device would connect to the SF services.
The other approach would be using Azure IoT Hub to bridge the communication between both. There is a nice IoT Hub + Service Fabric sample that might be suitable for your needs.
Because you want to avoid Azure, you could in this case replace IoT Hub with another messaging platform, or implement rosbridge in your service to handle the calls.
I hope I understood everything right.
About the obstacles:
I think the major issue here is that the bi-directional connection is established between a specific service replica and the robot.
This has two major problems:
Only the primary replica has write access, i.e. only one replica is able to modify state. This issue could be mitigated by creating a separate partition for each robot (but please remember that you can't change the partition count after the service has been created) or by creating a separate service instance for each robot (this would allow you to dynamically add or remove robots, but would require additional logic around service discoverability).
A replica can be shut down (terminated), moved to another node (shut down and restarted as a new replica) or even demoted (the primary replica gets demoted to secondary and another secondary replica gets promoted to primary) for various reasons. So the service code and the robot communication code should be able to handle this.
About WebSockets
This looks possible by implementing a custom ICommunicationListener and the remaining plumbing using WebSockets.
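For the outbound side (the service holding a WebSocket client per robot), the connection handling itself is plain code; here is a minimal sketch of one robot link with a heartbeat and a reconnect policy, using the JDK 11 java.net.http WebSocket client. The URI, intervals and forwarding hook are placeholders, and in a stateful service this would be driven from the replica's lifecycle (started when it becomes primary, cancelled when it is demoted or moved):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.WebSocket;
    import java.nio.ByteBuffer;
    import java.util.concurrent.*;

    // One robot connection: connect, heartbeat via ping frames, reconnect on failure.
    public class RobotLink implements WebSocket.Listener {
        private final URI robotUri; // e.g. ws://robot-42:9090 (rosbridge's usual port)
        private final HttpClient http = HttpClient.newHttpClient();
        private final ScheduledExecutorService timer =
                Executors.newSingleThreadScheduledExecutor();
        private volatile ScheduledFuture<?> heartbeat;

        public RobotLink(URI robotUri) { this.robotUri = robotUri; }

        public void connect() {
            http.newWebSocketBuilder().buildAsync(robotUri, this)
                .whenComplete((ws, err) -> {
                    if (err != null) {
                        // Connection failed: retry later instead of giving up.
                        timer.schedule(this::connect, 5, TimeUnit.SECONDS);
                    } else {
                        // Heartbeat: a ping every 10s; a dead peer surfaces as onError.
                        heartbeat = timer.scheduleAtFixedRate(
                                () -> ws.sendPing(ByteBuffer.allocate(0)),
                                10, 10, TimeUnit.SECONDS);
                    }
                });
        }

        @Override
        public CompletionStage<?> onText(WebSocket ws, CharSequence data, boolean last) {
            // Hand the rosbridge JSON message off to the protocol gateway here.
            ws.request(1); // ask for the next frame
            return null;
        }

        @Override
        public void onError(WebSocket ws, Throwable error) {
            if (heartbeat != null) heartbeat.cancel(false);
            connect(); // simple reconnect policy
        }
    }

Because only the primary replica runs this loop, the duplicate-message concern from the question stays a non-issue, as long as reconnects are driven from that single replica.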
I am new to Spring Cloud. Currently, I want to build a new microservice based on Spring Cloud. It is very easy to build a new Eureka server, but my question is how to make it highly available. For example, I create two Eureka servers and a load balancer, so that when one of the Eureka servers is down, the system still works. But I don't know how to keep the registration information consistent between the two Eureka servers.
I have already asked something similar in the spring cloud gitter channel.
Because of the CAP theorem, something like a distributed service discovery has to decide whether to provide availability or consistency, with a trade-off against the other.
In short, quoting Spencer Gibb:
Eureka favors availability over consistency
So it is highly available, while the registered services may no longer be up to date.
As Spencer suggested, if consistency is something you need more than availability, try Consul together with Spring Cloud Consul instead.
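As for keeping the two servers' registries in sync: Eureka servers replicate registrations to the peers listed in their own serviceUrl, so each server simply points at the other. A minimal sketch of the peer-awareness configuration (hostnames and port are placeholders):

    # application.yml on peer1 -- the mirror config on peer2 points back at peer1
    eureka:
      instance:
        hostname: peer1
      client:
        registerWithEureka: true
        fetchRegistry: true
        serviceUrl:
          defaultZone: http://peer2:8761/eureka/

Clients then list both peers in their own defaultZone, so registration and lookup survive the loss of either server.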
If yes, I have the following questions:
1. After the PEP proxy service is started up, should the Context Broker also be restarted (which I cannot do)?
2. Should the IM and AM servers be started up separately?
3. If I use a CEP instance to send events to the Orion Context Broker, is there any way to specify that the Orion broker is secured? How do I create users for the PEP proxy server? Or is there any way for a CEP instance to bypass the authentication and authorisation to the Orion Context Broker?
Concerning 1: conceptually, PEP Proxies should be transparent to the components they are protecting, so you shouldn't have to make changes or restart your Context Broker.
Concerning 2: if by "started up separately" you mean they are different processes, independent from the PEP proxy, and should be started up separately, yes they are: they are independent of the use of a PEP proxy; it will be the PEP that contacts both systems to do its job. If by "separately" you mean "on different machines", that's not really needed: you can have your own security machine with all the components, although that's not advisable.
Your third question will depend on which CEP you are going to use, as @fgalan pointed out. If the CEP supports the FIWARE authorization mechanisms, you can integrate it with the PEP-protected CB; if it does not, but your system doesn't require users to interact directly with the CEP, you can establish a secure connection between the Context Broker and the CEP independently (by using security groups or firewall rules), thus bypassing the PEP protection for your system's internal components (by using the secured internal ports instead of the public ones).
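For reference, a request that goes through the PEP looks like a normal Orion call plus the token header the proxy validates. A minimal sketch, assuming a Wilma-style PEP in front of Orion's NGSIv1 API, where the host, port and token acquisition are all placeholders:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class PepQueryExample {
        public static void main(String[] args) throws Exception {
            String token = System.getenv("OAUTH_TOKEN"); // obtained beforehand from the IdM
            HttpRequest req = HttpRequest.newBuilder()
                    // Hypothetical host/port: the PEP proxy's public port, not Orion's own 1026.
                    .uri(URI.create("http://pep-proxy:1027/v1/queryContext"))
                    .header("Content-Type", "application/json")
                    .header("X-Auth-Token", token) // the header the PEP checks
                    .POST(HttpRequest.BodyPublishers.ofString(
                            "{\"entities\":[{\"type\":\"Room\",\"isPattern\":\"false\",\"id\":\"Room1\"}]}"))
                    .build();
            HttpResponse<String> res = HttpClient.newHttpClient()
                    .send(req, HttpResponse.BodyHandlers.ofString());
            System.out.println(res.statusCode() + " " + res.body());
        }
    }

An internal component that bypasses the PEP would make the same request against Orion's own port directly, with no token header, relying on network rules for protection.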
Hope this solves some of your doubts.
I have 2 instances of JBoss servers running on, e.g., 127.0.0.1 and 127.0.0.2.
I have implemented JBoss load balancing, but I am not sure how to achieve server failover. I do not have a web server to monitor the heartbeat, and hence using mod_cluster is out of the question. Is there any way I can achieve failover using only the two available servers?
Any help would be appreciated. Thanks.
JBoss clustering automatically provides JNDI and EJB failover and also HTTP session replication.
If your JBoss AS nodes are in a cluster then the failover should just work.
The Documentation refers to an older version of JBoss (5.1) but it has clear descriptions of how JBoss clustering works.
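On the client side, JNDI failover in that generation of JBoss comes from handing the InitialContext a comma-separated list of HA-JNDI endpoints. A minimal sketch (1100 is the default HA-JNDI port in AS 5.x, and the JNDI name is hypothetical):

    import java.util.Properties;
    import javax.naming.Context;
    import javax.naming.InitialContext;

    public class HaJndiLookup {
        public static void main(String[] args) throws Exception {
            Properties env = new Properties();
            env.put(Context.INITIAL_CONTEXT_FACTORY,
                    "org.jnp.interfaces.NamingContextFactory");
            // Both cluster nodes: the client tries each in turn until one answers.
            env.put(Context.PROVIDER_URL,
                    "jnp://127.0.0.1:1100,jnp://127.0.0.2:1100");
            Context ctx = new InitialContext(env);
            Object bean = ctx.lookup("SomeBean/remote"); // hypothetical JNDI name
            System.out.println("Resolved via HA-JNDI: " + bean);
        }
    }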
You could spin up another instance to serve as your domain controller, and the two instances you already have would be your hosts. Then you could go through the domain controller, and it will do the work for you. However, I haven't seen instances go down too often; it is usually servers that do, and it looks like you are using just one server (I might be wrong) for both instances, so I would consider splitting them up.
I want to develop some local network services using Apache Thrift. There should be multiple services waiting for ONE master to connect to them and use them exclusively until the master releases them. The services are written in multiple languages.
I chose Thrift because I need a simple remote procedure call mechanism for communication between the services that is fast and supports multiple languages. While Thrift is good for RPC, I need some mechanism to locate the services' TCP addresses and ports via auto-discovery, to be able to connect the Thrift servers/clients with each other without hardwiring the addresses.
What possibilities do I have for auto-discovery of this sort of service?
Thanks!
There is nothing you can just plug into your scheme of things, but you can build something like it using Apache ZooKeeper. Netflix's Curator provides a good set of tools to build this on top of ZooKeeper. See https://github.com/Netflix/curator
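A minimal sketch of that approach using Curator's service discovery extension (curator-x-discovery, shown here under its current Apache coordinates, as the project later moved from Netflix to Apache): each Thrift service registers its host/port under a shared base path, and the master queries that path to find live instances. Service name, address and port are placeholders:

    import org.apache.curator.framework.CuratorFramework;
    import org.apache.curator.framework.CuratorFrameworkFactory;
    import org.apache.curator.retry.ExponentialBackoffRetry;
    import org.apache.curator.x.discovery.ServiceDiscovery;
    import org.apache.curator.x.discovery.ServiceDiscoveryBuilder;
    import org.apache.curator.x.discovery.ServiceInstance;

    public class ThriftServiceRegistry {
        public static void main(String[] args) throws Exception {
            CuratorFramework zk = CuratorFrameworkFactory.newClient(
                    "zookeeper-host:2181", new ExponentialBackoffRetry(1000, 3));
            zk.start();

            // Each Thrift service announces where its server socket listens.
            ServiceInstance<Void> me = ServiceInstance.<Void>builder()
                    .name("image-service")   // hypothetical service name
                    .address("10.0.0.5")     // this node's address
                    .port(9090)              // the Thrift server's port
                    .build();

            ServiceDiscovery<Void> discovery = ServiceDiscoveryBuilder
                    .builder(Void.class)
                    .client(zk)
                    .basePath("/thrift-services")
                    .thisInstance(me)        // registered when start() is called
                    .build();
            discovery.start();

            // The master side: enumerate live instances and pick one to connect to.
            for (ServiceInstance<Void> s : discovery.queryForInstances("image-service")) {
                System.out.println("found " + s.getAddress() + ":" + s.getPort());
            }
        }
    }

Registrations live in ephemeral ZooKeeper nodes tied to the client session, so a service that dies or loses connectivity disappears from the listing automatically, which is exactly what the master needs for claiming and releasing services.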