No server chosen by WritableServerSelector from cluster description - mongodb

I am trying to insert my application logs into MongoDB. I have created a custom appender and have overridden the append method as follows:
public void append(LoggingEvent loggingEvent) {
    Document doc = convertToMongoDocument(loggingEvent);
    pushDocToDB(doc);
}

public Document convertToMongoDocument(LoggingEvent event) {
    Document doc = new Document();
    // will read from the actual logging event later
    doc.append("logger", "logger");
    doc.append("user", "user");
    doc.append("message", "message");
    doc.append("timestamp", "timestamp");
    return doc;
}

public void pushDocToDB(Document doc) {
    getCollection().insertOne(doc);
}
I keep running into the error below when trying to insert a document from a Java client. I have not created any replica sets and am using a standalone instance of MongoDB on my local machine.
No server chosen by WritableServerSelector from cluster description ClusterDescription{type=UNKNOWN, connectionMode=SINGLE, all=[ServerDescription{address=localhost:27017, type=UNKNOWN, state=CONNECTING}]}. Waiting for 1000000 ms before timing out
I am using mongo-java-driver version 3.4.0 and JDK 1.7.
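For reference, getCollection() is not shown above; a minimal sketch of how it might be wired for a standalone local instance (the database and collection names here are placeholders, not taken from the original code):
import com.mongodb.MongoClient;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

// Hypothetical helper; in the real appender the client should be created once and reused.
private MongoCollection<Document> collection;

private MongoCollection<Document> getCollection() {
    if (collection == null) {
        // Standalone mongod on the default port, as described above.
        MongoClient client = new MongoClient("localhost", 27017);
        collection = client.getDatabase("logs").getCollection("applicationLogs");
    }
    return collection;
}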

Related

Spring data mongo resume token for MessageListenerContainer

Hi, I am trying to implement a MessageListener on the MongoDB oplog (change stream), and it should resume streaming from the last document processed before the listener stopped.
Currently I have the code set up as below. I am not sure how to retrieve the resume token of the last processed document and set it, so that if the listener application goes down and comes back online, it reads from just after the last read.
@Bean
MessageListenerContainer candidateMessageListenerContainer(MongoTemplate mongoTemplate,
        @Qualifier("candidateMessageListener") MessageListener documentMessageListener) {
    Executor executor = Executors.newSingleThreadExecutor();
    MessageListenerContainer messageListenerContainer = new DefaultMessageListenerContainer(mongoTemplate, executor) {
        @Override
        public boolean isAutoStartup() {
            return true;
        }
    };
    // Requires static imports of Aggregation.newAggregation, Aggregation.match and Criteria.where.
    ChangeStreamRequest<Candidate> request = ChangeStreamRequest.builder(documentMessageListener)
            .collection("candidate") // collection to listen to; the database defaults to the template's database
            .filter(newAggregation(match(where("operationType").in("insert", "update", "replace", "delete")))) // restrict the operation types to monitor
            .fullDocumentLookup(FullDocument.UPDATE_LOOKUP) // without this, updates only carry the changed fields; UPDATE_LOOKUP returns the full document
            .build();
    messageListenerContainer.register(request, Candidate.class);
    return messageListenerContainer;
}
You can use the getResumeToken() or getTimestamp() methods on ChangeStreamEvent to obtain this information. See the Reference Documentation - Resuming Change Streams for details.
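A hedged sketch of how that could look here, assuming the token is persisted between restarts (saveLastResumeToken()/loadLastResumeToken() and processCandidate() below are hypothetical helpers, and resumeToken(...) on the request builder is the assumed hook for passing the token back):
// Additional imports assumed: org.bson.BsonDocument, com.mongodb.client.model.changestream.ChangeStreamDocument.

// In the listener: remember the resume token of every event that was processed.
MessageListener<ChangeStreamDocument<Document>, Candidate> listener = message -> {
    processCandidate(message.getBody());                     // hypothetical business logic
    BsonDocument token = message.getRaw().getResumeToken();  // token of the last processed event
    saveLastResumeToken(token);                              // hypothetical durable store
};

// On startup: resume just after the last persisted token, if there is one.
BsonDocument lastToken = loadLastResumeToken();              // hypothetical durable store
ChangeStreamRequest.ChangeStreamRequestBuilder<Candidate> builder = ChangeStreamRequest.builder(listener)
        .collection("candidate")
        .fullDocumentLookup(FullDocument.UPDATE_LOOKUP);
if (lastToken != null) {
    builder = builder.resumeToken(lastToken);
}
ChangeStreamRequest<Candidate> request = builder.build();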

SCDF server dual database connection error

In my Spring Batch task I have two datasources configured, one for Oracle and another for H2.
I'm using H2 for the batch and task execution tables, and Oracle for the actual data processed by the batch. I'm able to run the task successfully from the IDE, but when I run it from the SCDF server I get the following error:
Caused by: java.sql.SQLException: Unable to start the Universal Connection Pool: oracle.ucp.UniversalConnectionPoolException: Cannot get Connection from Datasource: org.h2.jdbc.JdbcSQLInvalidAuthorizationSpecException: Wrong user name or password [28000-200]
My question is: why is it using the H2 connection details for the Oracle connection as well?
The following is my DB configuration:
#Bean(name = "OracleUniversalConnectionPool")
public DataSource secondaryDataSource() {
PoolDataSource pds = null;
try {
pds = PoolDataSourceFactory.getPoolDataSource();
pds.setConnectionFactoryClassName(driverClassName);
pds.setURL(url);
pds.setUser(username);
pds.setPassword(password);
pds.setMinPoolSize(Integer.valueOf(minPoolSize));
pds.setInitialPoolSize(10);
pds.setMaxPoolSize(Integer.valueOf(maxPoolSize));
} catch (SQLException ea) {
log.error("Error connecting to the database: ", ea.getMessage());
}
return pds;
}
#Bean(name = "batchDataSource")
#Primary
public DataSource dataSource() throws SQLException {
final SimpleDriverDataSource dataSource = new SimpleDriverDataSource();
dataSource.setDriver(new org.h2.Driver());
dataSource.setUrl("jdbc:h2:tcp://localhost:19092/mem:dataflow");
dataSource.setUsername("sa");
dataSource.setPassword("");
return dataSource;
}
I got it resolved. The problem was that I was using these properties to set the values for the Oracle DB:
spring.datasource.url: ******
spring.datasource.username: ******
This worked fine from the IDE, but when I ran it on the SCDF server these properties were overwritten by the default properties that the SCDF server uses for its own database connection. So I updated the connection properties to:
spring.datasource.oracle.url: ******
spring.datasource.oracle.username: ******
And now it's working as expected.
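For completeness, a sketch of how the renamed properties could be bound into the Oracle pool bean above (the @Value fields are illustrative; the original question doesn't show where url/username/password come from):
// Requires org.springframework.beans.factory.annotation.Value.
// Custom prefix so the SCDF server's own spring.datasource.* values can no longer override these.
@Value("${spring.datasource.oracle.url}")
private String url;

@Value("${spring.datasource.oracle.username}")
private String username;

@Value("${spring.datasource.oracle.password}")
private String password;

@Value("${spring.datasource.oracle.driver-class-name}")
private String driverClassName;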

Cannot query dockerized MongoDB container from dockerized ASP.NET Core 3.1 container

I have a dockerized .NET Core container and am trying to query a MongoDB database. I have a REST API that is called to query the database from the .NET Core container. It seems as if I am able to establish a connection to the container when the instance of the service is created:
private readonly IMongoCollection<DbObject> _dbObjects;

public TaxService(IDbConfig dbConfig)
{
    Console.Out.WriteLine("got here: " + dbConfig.ConnectionString);
    MongoClient client = new MongoClient(dbConfig.ConnectionString);
    IMongoDatabase database = client.GetDatabase(dbConfig.DatabaseName);
    _dbObjects = database.GetCollection<DbObject>(dbConfig.SalesTaxCollectionName);
    Console.Out.WriteLine("db initialized");
}
Here is the dbConfig object:
"DbConfig": {
"SalesTaxCollectionName": "ExampleCollection",
"ConnectionString": "mongodb://mongo:27017",
"DatabaseName": "ExampleDb"
}
It gets through this code, but when the database is actually queried, a PlatformNotSupportedException is thrown:
public DbObject GetByValue(string value)
{
    Console.Out.WriteLine("querying db");
    List<DbObject> matches = _dbObjects.Find(dbObject => dbObject.Value.Equals(value)).ToList();
    Console.Out.WriteLine("successfully queried"); // Does not reach this point
    if (matches.Any())
    {
        return matches.First();
    }
    else
    {
        throw new ArgumentException("No values matched");
    }
}
I have put the rest of the files in gists for convenience:
docker-compose.yml: https://gist.github.com/MinhazMurks/1fbb47afd360bbac48df45b1f0609e33
Dockerfile: https://gist.github.com/MinhazMurks/bb2b7f76d28894a81136d940b5997165
InitExampleDb.js: https://gist.github.com/MinhazMurks/9aae03daceee1e689c4e821419966f41
Full-Log: https://gist.github.com/MinhazMurks/0de9cd822fcf2930065527191127c83b
Any insight would be appreciated!
You can try adding the following line to the hosts file
(on Windows it is located at C:\Windows\System32\drivers\etc\hosts):
127.0.0.1 mongo
After that, try connecting with RoboMongo: enter mongo in the Address textbox and attempt the connection.

Distributed state-machine's zookeeper ensemble fails while processing parallel regions with error KeeperErrorCode = BadVersion

Background:
Diagram: state machine UML state diagram (image)
We have a normal state machine, as depicted in the diagram, that monitors a Spring Batch microservice (deployed on a streams source/processor/sink design) for each batch that is started.
We receive a sequence of REST calls to internally fire events per batch id on the respective batch's machine object, i.e. a new state machine object is created per batch id.
Each machine has n parallel regions (representing Spring Batch's chunks), as shown in the diagram.
The REST calls are made in a multi-threaded environment, where two simultaneous calls for the same batchId may arrive for different region ids of the BATCHPROCESSING state.
Up until now we had this state machine microservice running on a single node (single installation), but now we want to deploy it on multiple instances to receive the REST calls.
For this, we want to introduce the Distributed State Machine. We have the following configuration in place for running the distributed state machine:
@Configuration
@EnableStateMachine
public class StateMachineUMLWayConfiguration extends StateMachineConfigurerAdapter<String, String> {
    ..
    ..
    @Override
    public void configure(StateMachineModelConfigurer<String, String> model) throws Exception {
        model
            .withModel()
                .factory(stateMachineModelFactory());
    }

    @Bean
    public StateMachineModelFactory<String, String> stateMachineModelFactory() {
        StorehubBatchUmlStateMachineModelFactory factory = null;
        try {
            factory = new StorehubBatchUmlStateMachineModelFactory(templateUMLInClasspath, stateMachineEnsemble());
        } catch (Exception e) {
            LOGGER.info("Config's State machine factory got exception: " + e);
        }
        LOGGER.info("Config's State machine factory method called: " + factory);
        factory.setStateMachineComponentResolver(stateMachineComponentResolver());
        return factory;
    }

    @Override
    public void configure(StateMachineConfigurationConfigurer<String, String> config) throws Exception {
        config
            .withDistributed()
                .ensemble(stateMachineEnsemble());
    }

    @Bean
    public StateMachineEnsemble<String, String> stateMachineEnsemble() throws Exception {
        return new ZookeeperStateMachineEnsemble<String, String>(curatorClient(), "/batchfoo1", true, 512);
    }

    @Bean
    public CuratorFramework curatorClient() throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.builder()
                .defaultData(new byte[0])
                .retryPolicy(new ExponentialBackoffRetry(1000, 3))
                .connectString("localhost:2181")
                .build();
        client.start();
        return client;
    }
StorehubBatchUmlStateMachineModelFactory's build method:
@Override
public StateMachineModel<String, String> build(String batchChunkId) {
    Model model = null;
    try {
        model = UmlUtils.getModel(getResourceUri(resolveResource(batchChunkId)).getPath());
    } catch (IOException e) {
        throw new IllegalArgumentException("Cannot build model from resource " + resource + " or location " + location, e);
    }
    UmlModelParser parser = new UmlModelParser(model, this);
    DataHolder dataHolder = parser.parseModel();
    ConfigurationData<String, String> configurationData = new ConfigurationData<String, String>(
            null, new SyncTaskExecutor(), new ConcurrentTaskScheduler(), false, stateMachineEnsemble,
            new ArrayList<StateMachineListener<String, String>>(), false,
            null, null,
            null, null, false,
            null, batchChunkId, null,
            null);
    return new DefaultStateMachineModel<String, String>(configurationData, dataHolder.getStatesData(), dataHolder.getTransitionsData());
}
We created a new custom method at the service interface level, in place of DefaultStateMachineService.acquireStateMachine(machineId):
@Override
public StateMachine<String, String> acquireDistributedStateMachine(String machineId, boolean start) {
    synchronized (distributedMachines) {
        DistributedStateMachine<String, String> distributedStateMachine = distributedMachines.get(machineId);
        StateMachine<String, String> distMachineDelegateX = null;
        if (distributedStateMachine == null) {
            StateMachine<String, String> machine = stateMachineFactory.getStateMachine(machineId);
            distributedStateMachine = (DistributedStateMachine<String, String>) machine;
        }
        distributedMachines.put(machineId, distributedStateMachine);
        return handleStart(distributedStateMachine, start);
    }
}
Problem:
Now the problem is that the microservice deployed on a single instance runs successfully, even when the events it receives come from a multi-threaded environment where one thread hits it with an event REST call belonging to region 1 while another thread simultaneously arrives for region 2 of the same batch. The machine moves ahead in sync, with the parallel regions processed successfully, up to its last state, i.e. BATCHCOMPLETED.
We also checked on the ZooKeeper side that, at the end, the BATCHCOMPLETED state was recorded in the node's current version.
But when, besides the 1st instance, we deploy the same microservice app jar in another location to act as a 2nd instance (also running to accept event REST calls, say by listening on another Tomcat port 9002), it fails somewhere in the middle, randomly. This failure happens randomly after any one of the events among the parallel regions is fired, when ensemble.setState() is called internally on the state change for that event.
It gives the following error:
o.s.s.support.AbstractStateMachine : Interceptors threw exception, skipping state change
org.springframework.statemachine.StateMachineException: Error persisting data; nested exception is org.springframework.statemachine.StateMachineException: Error persisting data; nested exception is org.apache.zookeeper.KeeperException$BadVersionException: KeeperErrorCode = BadVersion
at org.springframework.statemachine.zookeeper.ZookeeperStateMachineEnsemble.setState(ZookeeperStateMachineEnsemble.java:241) ~[spring-statemachine-zookeeper-2.0.1.RELEASE.jar!/:2.0.1.RELEASE]
at org.springframework.statemachine.ensemble.DistributedStateMachine$LocalStateMachineInterceptor.preStateChange(DistributedStateMachine.java:209) ~[spring-statemachine-core-2.0.0.RELEASE.jar!/:2.0.0.RELEASE]
at org.springframework.statemachine.support.StateMachineInterceptorList.preStateChange(StateMachineInterceptorList.java:101) ~[spring-statemachine-core-2.0.0.RELEASE.jar!/:2.0.0.RELEASE]
at org.springframework.statemachine.support.AbstractStateMachine.callPreStateChangeInterceptors(AbstractStateMachine.java:859) [spring-statemachine-core-2.0.0.RELEASE.jar!/:2.0.0.RELEASE]
at org.springframework.statemachine.support.AbstractStateMachine.switchToState(AbstractStateMachine.java:880) [spring-statemachine-core-2.0.0.RELEASE.jar!/:2.0.0.RELEASE]
at org.springframework.statemachine.support.AbstractStateMachine.access$500(AbstractStateMachine.java:81) [spring-statemachine-core-2.0.0.RELEASE.jar!/:2.0.0.RELEASE]
at org.springframework.statemachine.support.AbstractStateMachine$3.transit(AbstractStateMachine.java:335) [spring-statemachine-core-2.0.0.RELEASE.jar!/:2.0.0.RELEASE]
at org.springframework.statemachine.support.DefaultStateMachineExecutor.handleTriggerTrans(DefaultStateMachineExecutor.java:286) [spring-statemachine-core-2.0.0.RELEASE.jar!/:2.0.0.RELEASE]
at org.springframework.statemachine.support.DefaultStateMachineExecutor.handleTriggerTrans(DefaultStateMachineExecutor.java:211) [spring-statemachine-core-2.0.0.RELEASE.jar!/:2.0.0.RELEASE]
at org.springframework.statemachine.support.DefaultStateMachineExecutor.processTriggerQueue(DefaultStateMachineExecutor.java:449) [spring-statemachine-core-2.0.0.RELEASE.jar!/:2.0.0.RELEASE]
at org.springframework.statemachine.support.DefaultStateMachineExecutor.access$200(DefaultStateMachineExecutor.java:65) [spring-statemachine-core-2.0.0.RELEASE.jar!/:2.0.0.RELEASE]
at org.springframework.statemachine.support.DefaultStateMachineExecutor$1.run(DefaultStateMachineExecutor.java:323) [spring-statemachine-core-2.0.0.RELEASE.jar!/:2.0.0.RELEASE]
at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:50) [spring-core-4.3.13.RELEASE.jar!/:4.3.13.RELEASE]
at org.springframework.statemachine.support.DefaultStateMachineExecutor.scheduleEventQueueProcessing(DefaultStateMachineExecutor.java:352) [spring-statemachine-core-2.0.0.RELEASE.jar!/:2.0.0.RELEASE]
at org.springframework.statemachine.support.DefaultStateMachineExecutor.execute(DefaultStateMachineExecutor.java:163) [spring-statemachine-core-2.0.0.RELEASE.jar!/:2.0.0.RELEASE]
at org.springframework.statemachine.support.AbstractStateMachine.sendEventInternal(AbstractStateMachine.java:603) [spring-statemachine-core-2.0.0.RELEASE.jar!/:2.0.0.RELEASE]
at org.springframework.statemachine.support.AbstractStateMachine.sendEvent(AbstractStateMachine.java:218) [spring-statemachine-core-2.0.0.RELEASE.jar!/:2.0.0.RELEASE]
at org.springframework.statemachine.ensemble.DistributedStateMachine.sendEvent(DistributedStateMachine.java:108)
..skipping Lines....
Caused by: org.springframework.statemachine.StateMachineException: Error persisting data; nested exception is org.apache.zookeeper.KeeperException$BadVersionException: KeeperErrorCode = BadVersion
at org.springframework.statemachine.zookeeper.ZookeeperStateMachinePersist.write(ZookeeperStateMachinePersist.java:113) ~[spring-statemachine-zookeeper-2.0.1.RELEASE.jar!/:2.0.1.RELEASE]
at org.springframework.statemachine.zookeeper.ZookeeperStateMachinePersist.write(ZookeeperStateMachinePersist.java:50) ~[spring-statemachine-zookeeper-2.0.1.RELEASE.jar!/:2.0.1.RELEASE]
at org.springframework.statemachine.zookeeper.ZookeeperStateMachineEnsemble.setState(ZookeeperStateMachineEnsemble.java:235) ~[spring-statemachine-zookeeper-2.0.1.RELEASE.jar!/:2.0.1.RELEASE]
... 73 common frames omitted
Caused by: org.apache.zookeeper.KeeperException$BadVersionException: KeeperErrorCode = BadVersion
at org.apache.zookeeper.KeeperException.create(KeeperException.java:115) ~[zookeeper-3.4.8.jar!/:3.4.8--1]
at org.apache.zookeeper.ZooKeeper.multiInternal(ZooKeeper.java:1006) ~[zookeeper-3.4.8.jar!/:3.4.8--1]
at org.apache.zookeeper.ZooKeeper.multi(ZooKeeper.java:910) ~[zookeeper-3.4.8.jar!/:3.4.8--1]
at org.apache.curator.framework.imps.CuratorTransactionImpl.doOperation(CuratorTransactionImpl.java:159)
Question:
1. Does the configuration mentioned above need something more to be configured in order to avoid the exception shown above?
Both state-machine microservice instances were tested both when connecting to the same ZooKeeper instance (i.e. the same .connectString("localhost:2181").build()) and when connecting to different ZooKeeper instances (i.e. 'localhost:2181' and 'localhost:2182').
The same BadVersion exception occurs during the state machine ensemble's processing in both cases.
2. Also, if batches run in parallel, their respective machines need to be created and run in parallel on the state-machine microservice side.
So technically we need a new state machine per new batchId, running simultaneously.
But looking at ZookeeperStateMachineEnsemble, one znode path seems to be associated with one ensemble, instantiated once in the main config class ("StateMachineUMLWayConfiguration").
So is it expected that only that singleton ensemble instance is used? Can't multiple ensembles be created at runtime, referencing different znode paths, and run in parallel so that each distributed state machine logs its states to its own znode path? (A sketch of what we have in mind follows below.)
a. Batches running in parallel would need separate znode paths to be created. Thus, in our attempt to keep a separate znode path per batch, we need a separate ensemble to be instantiated per batch's machine. But that seems to run into a lock condition while getting a connection to the znode through the curator client.
b. The REST call fired for event triggering does not complete, as the machine it acquired is stuck in the ensemble trying to connect.
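To make that idea concrete, here is roughly what we have in mind inside the same configuration class (an untested sketch; the ensembles map, the helper name and the per-batch znode naming are only illustrative):
// Requires java.util.Map and java.util.concurrent.ConcurrentHashMap imports.
// Untested sketch: one ensemble per batch, each with its own znode path.
private final Map<String, StateMachineEnsemble<String, String>> ensembles = new ConcurrentHashMap<>();

public synchronized StateMachineEnsemble<String, String> ensembleForBatch(String batchId) throws Exception {
    StateMachineEnsemble<String, String> ensemble = ensembles.get(batchId);
    if (ensemble == null) {
        // Separate znode path per batch, e.g. /batchfoo1/<batchId>.
        ensemble = new ZookeeperStateMachineEnsemble<String, String>(curatorClient(), "/batchfoo1/" + batchId, true, 512);
        // When created outside the Spring container, the ensemble's lifecycle
        // (afterPropertiesSet()/start()) presumably has to be handled manually.
        ensembles.put(batchId, ensemble);
    }
    return ensemble;
}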
Thanks in advance.

Spymemcache- Memcache/Membase Faileover

Platform: 64-bit Windows OS, spymemcached-2.7.3.jar, J2EE
We want to use two memcache/membase servers for our caching solution. We want to allocate 1 GB of memory to each memcache/membase server, so in total we can cache 2 GB of data.
We are using the spymemcached Java client for setting and getting data from memcache. We are not using any replication between the two membase servers.
We load the memcacheClient object at the time of our J2EE application startup.
URI server1 = new URI("http://192.168.100.111:8091/pools");
URI server2 = new URI("http://127.0.0.1:8091/pools");
ArrayList<URI> serverList = new ArrayList<URI>();
serverList.add(server1);
serverList.add(server2);
client = new MemcachedClient(serverList, "default", "");
After that we are using memcacheClient to get and set value in memcache/membase server.
Object obj = client.get("spoon");
client.set("spoon", 50, "Hello World!");
It looks like memcacheClient is setting and getting values only from server1.
If we stop server1, it fails to get/set the value. Should it not use server2 when server1 is down? Please let me know if we are doing anything wrong here...
The spymemcached Java client does not handle membase failover for a particular node.
Ref: https://blog.serverdensity.com/handling-memcached-failover/
We need to handle it manually (in our own code).
We can do this by using a ConnectionObserver.
Here is my code:
public static void main(String a[]) throws InterruptedException {
    try {
        URI server1 = new URI("http://192.168.100.111:8091/pools");
        URI server2 = new URI("http://127.0.0.1:8091/pools");
        final ArrayList<URI> serverList = new ArrayList<URI>();
        serverList.add(server1);
        serverList.add(server2);
        final MemcachedClient client = new MemcachedClient(serverList, "bucketName", "");
        client.addObserver(new ConnectionObserver() {

            @Override
            public void connectionLost(SocketAddress arg0) {
                // called when a connection is lost
                for (MemcachedNode node : client.getNodeLocator().getAll()) {
                    if (!node.isActive()) {
                        client.shutdown();
                        // re-init your client here; after re-init it will connect to your secondary node
                        break;
                    }
                }
            }

            @Override
            public void connectionEstablished(SocketAddress arg0, int arg1) {
                // called when a connection is established
            }
        });
        Object obj = client.get("spoon");
        client.set("spoon", 50, "Hello World!");
    } catch (Exception e) {
    }
}
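For the "re-init your client here" step, a minimal sketch of what the re-initialization might look like (the helper name and reusing the same bucket credentials are assumptions):
// Hypothetical helper: build a fresh client against the same server list,
// so that the surviving node is reconnected to.
private static MemcachedClient reinitClient(List<URI> serverList) throws IOException {
    return new MemcachedClient(serverList, "bucketName", "");
}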
client.get() will use the first available node, and therefore your value will be stored/updated on one node only.
You seem to be contradicting yourself a bit in your requirements: first you say 'we want to allocate 1 GB memory to each memcache/membase server so total we can cache 2 GB data', which implies a distributed cache model (a particular key is stored on only one node in the cache farm), and then you expect to fetch it even if that node is down, which obviously won't happen.
If you need your cache farm to survive a node failure without losing the data cached on that node, you should use replication, which is available in Membase, but then you obviously pay the price of storing the same values multiple times, so your goal of '1 GB per server... 2 GB of cache in total' won't be possible.