Cannot get any results from ElasticSearch with JDBC River - plugins

I cannot figure out how to use this plugin at all.
I am running this curl:
curl -XPUT 'localhost:9200/_river/faycare_kids/_meta' -d '{
  "jdbc" : {
    "driver" : "org.postgresql.Driver",
    "url" : "jdbc:postgresql://localhost:5432/faycare",
    "user" : "faycare",
    "password" : "password",
    "strategy" : "simple",
    "poll" : "5s",
    "scale" : 0,
    "autocommit" : true,
    "fetchsize" : 10,
    "index" : "faycare",
    "type" : "kid",
    "max_rows" : 0,
    "max_retries" : 3,
    "max_retries_wait" : "10s",
    "sql" : "SELECT kid.id as _id,kid.first_name,kid.last_name FROM kid;"
  }
}'
It returns:
{"ok":true,"_index":"_river","_type":"faycare_kids","_id":"_meta","_version":1}
How do I search/fetch/see my data?
How do I know if anything is indexed?
I tried so many things:
curl -XGET 'localhost:9200/_river/faycare_kids/_search?pretty&q=*'
This gives me info about the _river
curl -XGET 'localhost:9200/faycare/kid/_search?pretty&q=*'
This tells me: "error" : "IndexMissingException[[faycare] missing]"
I am running sudo service elasticsearch start to run it in the background.

First, I would install elasticsearch-head; it can be super useful for checking on your cluster.
You can get stats for all indices:
curl -XGET 'http://localhost:9200/_all/_status'
You can check if an index exists:
curl -XHEAD 'http://localhost:9200/myindex'
You should be able to search all indices like this:
curl -XGET 'localhost:9200/_all/_search?q=*'
If nothing shows up, your rivers are probably not working; I would check your logs to see if any errors appear.
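As a further sanity check, here is a sketch of a few more targeted commands, assuming the river is supposed to write into the faycare index as in the question (the _status document name is the rivers module's convention for recording startup state, so treat that part as an assumption):

```shell
# A 404 here means the river never created its target index
curl -I 'http://localhost:9200/faycare'

# Count the documents the river has indexed so far
curl -XGET 'localhost:9200/faycare/_count?pretty'

# The rivers module keeps a _status document next to _meta
curl -XGET 'localhost:9200/_river/faycare_kids/_status?pretty'
```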

The problem is in the way you are setting up your river: you are specifying the index and type where the river should bulk-index records in the wrong place.
The proper way of doing it would be this:
curl -XPUT 'localhost:9200/_river/faycare_kids/_meta' -d '{
  "type" : "jdbc",
  "jdbc" : {
    "driver" : "org.postgresql.Driver",
    "url" : "jdbc:postgresql://localhost:5432/faycare",
    "user" : "faycare",
    "password" : "password",
    "strategy" : "simple",
    "poll" : "5s",
    "scale" : 0,
    "autocommit" : true,
    "fetchsize" : 10,
    "max_rows" : 0,
    "max_retries" : 3,
    "max_retries_wait" : "10s",
    "sql" : "SELECT kid.id as _id,kid.first_name,kid.last_name FROM kid;"
  },
  "index" : {
    "index" : "faycare",
    "type" : "kid"
  }
}'

I appreciate all of your help. elasticsearch-head did give me some insight. Apparently I just had something wrong with my JSON; for some reason, when I changed it to this, it worked:
curl -XPUT 'localhost:9200/_river/my_jdbc_river/_meta' -d '{
  "type" : "jdbc",
  "jdbc" : {
    "driver" : "org.postgresql.Driver",
    "url" : "jdbc:postgresql://localhost:5432/faycare",
    "user" : "faycare",
    "password" : "hatpants",
    "index" : "jdbc",
    "type" : "jdbc",
    "sql" : "SELECT kid.id as _id,kid.first_name,kid.last_name FROM kid;"
  }
}'
I am not sure exactly what needed to be changed to make this work, but it does now work. I am guessing it was the outer "type" : "jdbc" that needed to be added, and that I can change the inner index and type.
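That guess can be checked mechanically. A minimal sketch, using trimmed versions of the two _meta bodies from the question (only the fields that matter for the comparison are kept): the rivers module dispatches on the top-level "type" field to decide which river plugin to start, so a _meta document without it is stored but never activates the JDBC river.

```python
import json

# Trimmed _meta bodies: the one that silently did nothing, and the one that worked.
broken = json.loads('{ "jdbc" : { "driver" : "org.postgresql.Driver" } }')
working = json.loads('{ "type" : "jdbc", "jdbc" : { "driver" : "org.postgresql.Driver" } }')

# The decisive difference is the top-level "type" key.
changed_keys = set(working) - set(broken)
print(changed_keys)
```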

I wrote a quick post on using this plugin; hopefully it can give you a little more insight. The post is located here.

Related

Find out user who created database/collection in MongoDB

I have many applications on my server that use MongoDB. I want to find out the username that was used to create a specific db/collection.
I see that some application is malfunctioning and keeps creating dbs dynamically. I want to identify the application through the user it connects as.
What I have done so far: I found the connection information in the MongoDB logs by grepping for that database, and then ran this query:
db.currentOp(true).inprog.forEach(function(o){ if (o.connectionId == 502925) printjson(o); });
And this is the result I am getting:
{
  "host" : "abcd-server:27017",
  "desc" : "conn502925",
  "connectionId" : 502925,
  "client" : "127.0.0.1:39266",
  "clientMetadata" : {
    "driver" : {
      "name" : "mongo-java-driver",
      "version" : "3.6.4"
    },
    "os" : {
      "type" : "Linux",
      "name" : "Linux",
      "architecture" : "amd64",
      "version" : "3.10.0-862.14.4.el7.x86_64"
    },
    "platform" : "Java/AdoptOpenJDK/1.8.0_212-b03"
  },
  "active" : false,
  "currentOpTime" : "2019-07-02T07:31:39.518-0500"
}
Please let me know if there is any way to find out the user.

How to set timeout on connection in `MongoClient` for VertX in Java

In the startup code of my application, I check whether the credentials for MongoDB are OK. As it is not possible to intercept failures as exceptions, I was advised to issue a request and wait for the timeout. Dirty, but at least it works. However, I'm not able to set a different value for the timeout than the default, which is 30 s.
My configuration JSON looks like this:
{
  "pool_name" : "mongodb",
  "host" : "localhost",
  "port" : 27017,
  "db_name" : "mydb",
  "username" : "xxxxxxxxx",
  "password" : "xxxxxxxxx",
  "authSource" : "admin",
  "maxPoolSize" : 5,
  "minPoolSize" : 1,
  "useObjectId" : true,
  "connectTimeoutMS" : 5000,
  "socketTimeoutMS" : 5000
}
I create the client with MongoClient mongodb = MongoClient.createShared(vertx, mongo_cnf, mongo_cnf.getString("pool_name"));, where mongo_cnf is the JSON above.
What am I doing wrong?
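One thing worth checking, offered as an assumption since the question does not say which timeout actually expires: a 30 s wait matches the MongoDB Java driver's default server-selection timeout rather than the connect or socket timeouts, and the Vert.x Mongo client accepts a serverSelectionTimeoutMS option that it passes through to the driver. A sketch of the adjusted configuration:

```json
{
  "pool_name" : "mongodb",
  "host" : "localhost",
  "port" : 27017,
  "db_name" : "mydb",
  "username" : "xxxxxxxxx",
  "password" : "xxxxxxxxx",
  "authSource" : "admin",
  "maxPoolSize" : 5,
  "minPoolSize" : 1,
  "useObjectId" : true,
  "connectTimeoutMS" : 5000,
  "socketTimeoutMS" : 5000,
  "serverSelectionTimeoutMS" : 5000
}
```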

CosmosDB Invalid BSON With Update using $ [duplicate]

An embedded update query works fine in mLab and Atlas but is not working in Cosmos DB.
My collection structure:
{
  "_id" : ObjectId("5982f3f97729be2cce108785"),
  "password" : "$2y$10$F2P9ITmyKNebpoDaQ1ed4OxxMZSKmKFD9ipiU1klqio239c/nJcme",
  "nin" : "123",
  "login_status" : 1,
  "updated_at" : ISODate("2017-05-16T09:09:03.000Z"),
  "created_at" : ISODate("2017-05-16T06:08:47.000Z"),
  "files" : [
    {
      "name" : "abc",
      "updated_at" : ISODate("2017-05-16T06:08:48.000Z"),
      "created_at" : ISODate("2017-05-16T06:08:48.000Z"),
      "_id" : ObjectId("5982f3f97729be2cce108784")
    }
  ],
  "name" : "demo",
  "email" : "email#gmail.com",
  "phone" : "1231234"
}
My query is:
db.rail_zones.update(
  { "_id" : ObjectId("5982f3f97729be2cce108785"),
    "files._id" : ObjectId("5982f3f97729be2cce108784") },
  { $set : { "files.$.name" : "Changed" } }
)
I get this response:
{
  "acknowledged" : true,
  "matchedCount" : 0.0,
  "modifiedCount" : 0.0
}
Based on your description, I tested this issue on my side and found that the array update indeed does not work as expected. I assume the positional array update feature has not been implemented in the MongoDB compatibility layer of Azure Cosmos DB. Moreover, I found a feedback item, "Positional array update via '$' query support", discussing a similar issue.
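Until that is supported, one workaround is to avoid the positional operator entirely: read the document, modify the embedded element in client code, and write the whole array back. A sketch in the mongo shell, using the IDs from the question (note that this read-modify-write is not atomic, unlike the original single update):

```javascript
// Fetch the parent document by _id
var doc = db.rail_zones.findOne({ "_id" : ObjectId("5982f3f97729be2cce108785") });

// Modify the matching embedded element in client code
doc.files.forEach(function (f) {
  if (f._id.equals(ObjectId("5982f3f97729be2cce108784"))) {
    f.name = "Changed";
  }
});

// Write the whole array back without using the "$" positional operator
db.rail_zones.update({ "_id" : doc._id }, { $set : { "files" : doc.files } });
```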

How to do custom mapping using mongo connector with elasticsearch

I want to connect MongoDB and Elasticsearch. I used mongo-connector to connect them, following the instructions in the link below to set it up:
http://vi3k6i5.blogspot.in/2014/12/using-elastic-search-with-mongodb.html
I am able to connect MongoDB and Elasticsearch, but by default mongo-connector created indices in Elasticsearch for all databases in MongoDB.
I want to create only one index, for one of my databases, and I want to insert only selected fields of the documents. For example, in the mongo shell:
use hotels
db.restaurants.insert(
  {
    "address" : {
      "street" : "2 Avenue",
      "zipcode" : "10075",
      "building" : "1480",
      "coord" : [ -73.9557413, 40.7720266 ]
    },
    "borough" : "Manhattan",
    "cuisine" : "Italian",
    "grades" : [
      {
        "date" : ISODate("2014-10-01T00:00:00Z"),
        "grade" : "A",
        "score" : 11
      },
      {
        "date" : ISODate("2014-01-16T00:00:00Z"),
        "grade" : "B",
        "score" : 17
      }
    ],
    "name" : "Vella",
    "restaurant_id" : "41704620"
  }
)
This creates the database hotels and the collection restaurants. Now I want to create an index and put only the address field into Elasticsearch for that index.
Below are the steps I tried, but they are not working.
First I start mongo-connector like below:
Imomadmins-MacBook-Pro:~ jayant$ mongo-connector -m localhost:27017 -t localhost:9200 -d elastic_doc_manager --oplog-ts oplogstatus.txt
Logging to mongo-connector.log.
Then, from a new shell tab, I ran commands like:
curl -XPUT 'http://localhost:9200/hotels.restaurants/'
curl -XPUT 'http://localhost:9200/hotels.restaurants/string/_mapping' -d '{
  "string" : {
    "properties" : {
      "address" : { "type" : "string" }
    }
  }
}'
But only the index named hotels.restaurants is created in Elasticsearch; I can't see any documents in it.
Please suggest how to add documents to hotels.restaurants.
Well, I got an answer to my question: when starting mongo-connector, we can specify the collection name and the list of fields we are interested in. Please check the command below:
$ mongo-connector -m localhost:27017 -t localhost:9200 -d elastic_doc_manager --oplog-ts oplogstatus.txt --namespace-set hotels.restaurants --fields address,grades,name

Implement completion suggester on fields in the elasticsearch server

I am trying to implement the completion suggester on fields in my Elasticsearch server. When I try to execute this curl command:
curl -X POST 'localhost:9200/anisug/_suggest?pretty' -d '{
  "test" : {
    "text" : "n",
    "completion" : {
      "field" : "header"
    }
  }
}'
I get an exception:
ElasticSearchException[Field [header] is not a completion suggest field].
What am I missing out on?
I think that, while defining the mapping of anisug, you will need to set up the header field for completion suggest. For example, you can use this:
curl -X PUT 'localhost:9200/anisug/_mapping' -d '{
  "test" : {
    "properties" : {
      "name" : { "type" : "string" },
      "header" : {
        "type" : "completion",
        "index_analyzer" : "simple",
        "search_analyzer" : "simple",
        "payloads" : true
      }
    }
  }
}'
Similarly, while indexing the data, you'll need to send additional completion information. For more information, visit this link.
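To illustrate that last point, here is a sketch of indexing one document against a mapping like the one above (the type name, document ID, and field values are made up for the example): input lists the strings the suggester should match against, output is the text it returns, and payload carries extra data back with each suggestion.

```shell
curl -X PUT 'localhost:9200/anisug/test/1' -d '{
  "name" : "north",
  "header" : {
    "input" : [ "north", "n" ],
    "output" : "north",
    "payload" : { "id" : 1 }
  }
}'
```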