I have approximately this database wrapper class:
class MyDatabase {
  private def getMongoClientSettings(): MongoClientSettings = {
    val uri = "mongodb://localhost:27017"
    MongoClientSettings.builder()
      .applyConnectionString(new ConnectionString(uri))
      .serverApi(ServerApi.builder().version(ServerApiVersion.V1).build())
      .build()
  }

  private val mongoClient = MongoClient(getMongoClientSettings())
  private val db = mongoClient.getDatabase("myDatabase")

  object groups {
    val collection = db.getCollection("groups")
    ...
  }

  object posts {
    val collection = db.getCollection("posts")
    ...
  }

  object likes {
    val collection = db.getCollection("likes")
    ...
  }
}
The problem is that some inserts succeed, while other inserts silently fail.
For example, in the groups object, this command succeeds:
val doc = Document("_id" -> group.id, "name" -> group.name, "screenName" -> group.screenName,
"membersCount" -> group.membersCount, "lastInspectionDate" -> group.lastInspectionDate)
collection.insertOne(doc)
But in the posts object, the inserts never succeed (here the id field is not the primary key; the _id key is meant to be autogenerated and is not the same as id):
val doc = Document("id" -> post.id, "fromId" -> post.fromId, "ownerId" -> post.ownerId,
"publishDate" -> post.publishDate, "text" -> post.text, "isPinned" -> post.isPinned)
collection.insertOne(doc)
The question is: why do the inserts into the posts collection never succeed?
My thoughts:
Maybe the db.getCollection call somehow misconfigures the other collections, and I would need to call db.getCollection right before the insertOne command? This seems unrealistic.
I thought the method exited before the insertOne completed, but the groups collection has no such problem.
I tried to wait explicitly with this command, and still no posts were inserted: Await.result(collection.insertOne(doc).toFuture, Duration.Inf)
I found some mentions of the need to "subscribe" to the observable to make the cold stream start functioning, but I think this was relevant for older versions only (both approaches are sketched below).
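For reference, here is a minimal sketch of both approaches against the class above (assuming the 4.3 driver API, where insertOne returns a SingleObservable[InsertOneResult] that stays cold until something subscribes to it):

import org.mongodb.scala._
import org.mongodb.scala.result.InsertOneResult
import scala.concurrent.Await
import scala.concurrent.duration.Duration

// Approach 1: convert the observable to a Future and block until the insert finishes
Await.result(posts.collection.insertOne(doc).toFuture(), Duration.Inf)

// Approach 2: subscribe explicitly; nothing is sent to the server before this call
posts.collection.insertOne(doc).subscribe(new Observer[InsertOneResult] {
  override def onNext(result: InsertOneResult): Unit = println(s"inserted ${result.getInsertedId}")
  override def onError(e: Throwable): Unit = e.printStackTrace()
  override def onComplete(): Unit = println("completed")
})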
Configuration:
Linux Ubuntu 18.04, Scala 2.12.14, Mongo-Scala-Driver 4.3.1, MongoDB 5.0.2.
Looking forward to your replies.
After rebooting the computer and using Await.result(), everything started to work. I do not know why.
Related
Using ReactiveMongo 0.11 with Scala 2.11. I have an issue where my query results fail to come back in descending order. The following is my index and ReactiveMongo query:
collection.indexesManager.ensure(Index(
  Seq("userId" -> IndexType.Ascending, "lastActivity" -> IndexType.Descending),
  background = true))

def listEfforts(userId: String, page: Int, pageSize: Int): Future[\/[ErrMsg, List[EffortDesc]]] = {
  val query = BSONDocument("userId" -> userId)
  val sort = BSONDocument("lastActivity" -> -1)
  val skipN = (page - 1) * pageSize
  val queryOptions = new QueryOpts(skipN = skipN, batchSizeN = pageSize, flagsN = 0)
  collection.find(query).options(queryOptions).sort(sort).
    cursor[EffortDesc](ReadPreference.primaryPreferred).
    collect[List](pageSize).
    flatMap(list => Future(\/.right(list)))
}
What's happening is that my results are all ascending, even though my sort variable has been set to -1. lastActivity is a Unix timestamp field in milliseconds. I've tried other debugging steps (recompiling, etc.).
Any idea what could be causing this? Thanks for your help!
Found the issue. If I put IndexType.Descending on the lastActivity field and then additionally sort "descending" (via "lastActivity" -> -1), MongoDB first returns a descending sort according to its index and then sorts it again.
I'm not sure if this is normal/expected behavior in Mongo, but changing -1 to 1 fixed the issue.
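In other words, the change that fixed it was simply (against the listEfforts code above):

val sort = BSONDocument("lastActivity" -> 1) // with the IndexType.Descending index in place, results now came back newest-first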
Using either
("fieldName", BSONInteger.apply(1))
or
("fieldName", BSONInteger.apply(-1))
works for me.
I'm currently trying to create a percolator query with Elastic4s. I've got about this far, but I can't seem to find any examples, so I'm not sure quite how this works. So I've got:
val percQuery = percolate in esIndex / esType query myQuery
esClient.execute(percQuery)
Every time it runs it doesn't match anything. I figured out that I need to be able to percolate on an id, but I can't seem to find any examples of how to do it, not even in the docs. I know that with Elastic4s, creating queries other than a percolator query lets you specify an id field like:
val query = index into esIndex / esType source myDoc id 12345
I've tried this approach for percolate, but it doesn't like the id field. Does anyone know how this can be done?
I was using Dispatch Http to do this previously, but I'm trying to move away from it. Before, I was doing this to submit the percolator query:
url(s"$esUrl/.percolator/$queryId")
  .setContentType("application/json", "utf-8")
  .setBody(someJson)
  .POST
Notice the queryId; I just need something similar to that, but in elastic4s.
So you want to add a document and return the queries that are waiting for that id to be added? That seems an odd use of percolate, as it will be one-time use only: only one document can be added per id. You can't currently percolate on id in elastic4s, and I'm not sure you can even do it in elasticsearch itself.
This is the best attempt I can come up with, where you have your own "id" field, which could mirror the 'proper' _id field.
object Test extends App {

  // elastic4s 1.7.x import paths (they differ in later versions)
  import com.sksamuel.elastic4s.ElasticClient
  import com.sksamuel.elastic4s.ElasticDsl._
  import com.sksamuel.elastic4s.mappings.FieldType.StringType

  val client = ElasticClient.local

  client.execute {
    create index "perc" mappings {
      "idtest" as (
        "id" typed StringType
      )
    }
  }.await

  client.execute {
    register id "a" into "perc" query {
      termQuery("id", "a")
    }
  }.await

  client.execute {
    register id "b" into "perc" query {
      termQuery("id", "b")
    }
  }.await

  val resp1 = client.execute {
    percolate in "perc/idtest" doc ("id" -> "a")
  }.await

  // prints a
  println(resp1.getMatches.head.getId)

  val resp2 = client.execute {
    percolate in "perc/idtest" doc ("id" -> "b")
  }.await

  // prints b
  println(resp2.getMatches.head.getId)
}
Written using elastic4s 1.7.4
So after much more researching, I figured out how this works with elastic4s. To do this in Elastic4s you actually have to use register instead of percolate, like so:
val percQuery = register id queryId into esIndex query myQuery
This will register a percolator query at the given id.
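Once the query is registered, documents can be percolated against it. A minimal sketch reusing the names from the question ("someField" -> "someValue" is just a placeholder document; the doc(...) syntax is the same as in the answer above):

val percResp = esClient.execute {
  // returns the registered queries that match this document
  percolate in esIndex / esType doc ("someField" -> "someValue")
}.await
println(percResp.getMatches.head.getId)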
This post is related to the issue already raised in Dynamically changing the database shard that I am connecting to.
Pointers to the code that should be changed to implement this feature are given at https://github.com/slick/slick/issues/703
I am a newbie to Scala and Slick. Can I get some help on how to proceed with implementing this feature? Is there any Slick/Scala pattern to do this at the application level?
My problem is: "I have a pool of connections to different shards of MySQL; when I write a query/queries involving IDs (the sharding keys), Slick should dynamically run that particular query on the respective database shard."
For example, if I write a query like this
val q = for {
  user <- users.filter(_.name === "cat")
  post <- posts.filter(_.postedBy === user.id)
  comment <- comments.filter(_.postId === post.id)
} yield comment.content
q.run
a trivial case should be like the one below:
users += User(id = 1, name = "cat", email = "cat@mat.com") // hits shard no. 1
Even if the user ID, post ID and comment ID are produced dynamically, Slick should hit the correct database shard using some sharding criterion (e.g. key (ID) % 3), and everything should happen in the background just like a single-database query.
To implement the feature at the application level:
Is there any way to read the Query object's state dynamically, so that I could write a function like
def func(q: Query[Something], shards: Map[Int, Database], num: Int): Unit = {
  shards(q.getId % num).withSession { implicit session =>
    q.run
  }
}
Usage:
val q = users.insert(User(id = 1, name = "cat", email = "cat@cat.com"))
func(q, shards, 10) // q executes on one of the 10 shards
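The closest I can picture at the application level is explicit routing, sketched below against the Slick 2.x session API (onShard is my own hypothetical helper, and the caller has to pass the sharding key explicitly, since a Query cannot be inspected for it):

import scala.slick.driver.MySQLDriver.simple._

// Hypothetical helper: route on an explicitly supplied sharding key.
def onShard[T](shardKey: Int, shards: Map[Int, Database])(f: Session => T): T =
  shards(shardKey % shards.size).withSession(f)

// id = 1 routes to shard 1 % 10 = 1
onShard(1, shards) { implicit session =>
  users += User(id = 1, name = "cat", email = "cat@mat.com")
}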
Thanks.
I am both new to Scala and Casbah. I am trying to:
update a document if it exists (by _id) and create it if it doesn't exist,
while updating, update some key values,
while updating, update some keys whose values are sets, adding some data to those sets.
To achieve this, I've written this:
val update = MongoDBObject("_id" -> uri.toString) ++
  $addToSet("appearsOn" -> sourceToAppend) ++
  $addToSet("hasElements" -> elementsToAppend) ++
  $addToSet("hasTriples" -> triplesToAppend) ++
  MongoDBObject("uDate" -> new DateTime)
/* Find and replace here! */
OntologyDocument.dao.collection.findAndModify(
  query = MongoDBObject("_id" -> uri.toString),
  update = update,
  upsert = true,
  fields = null,
  sort = null,
  remove = false,
  returnNew = true
)
Documents are looked up by _id, some new items are added to appearsOn, hasElements and hasTriples, and uDate is updated.
sourceToAppend, elementsToAppend and triplesToAppend are List[String].
When I run this, I get this error:
java.lang.IllegalArgumentException: fields stored in the db can't start with '$' (Bad Key: '$addToSet')
at com.mongodb.DBCollection.validateKey(DBCollection.java:1444) ~[mongo-java-driver-2.11.1.jar:na]
I don't get it. What is wrong with this query? $addToSet isn't a field, so why does Casbah think it is a field? What am I doing wrong here?
The reason it's failing is that the update document is invalid (it won't work in the JS shell either).
$set is implicit for plain values in the update document, but you can't mix such values with update operators such as $addToSet. If you want to mix $set with other update operators, you can, as long as you are explicit:
val update = $set("uDate" -> new DateTime) ++
  $addToSet("appearsOn" -> sourceToAppend,
            "hasElements" -> elementsToAppend,
            "hasTriples" -> triplesToAppend)
You can't $set "_id", but as that's in the query and it's an upsert, it will be merged in, so don't include it in the update statement; otherwise it will error.
Finally, @AsyaKamsky is right: if you don't need the returned document, use an update instead; it's also atomic.
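Putting it together, the whole call might look like this (a sketch assembled from the snippets above; with upsert = true the document is created the first time the _id is seen):

val update = $set("uDate" -> new DateTime) ++
  $addToSet("appearsOn" -> sourceToAppend,
            "hasElements" -> elementsToAppend,
            "hasTriples" -> triplesToAppend)

OntologyDocument.dao.collection.findAndModify(
  query = MongoDBObject("_id" -> uri.toString), // _id goes in the query only, not in the update
  fields = null,
  sort = null,
  remove = false,
  update = update,
  returnNew = true,
  upsert = true
)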
I've probably missed something obvious, but within the ReactiveMongo API (v0.8) how do you set a limit on the number of documents returned by a query?
I'd like to return the single most recent document added to a collection. This is my code so far:
def getLatest()(implicit reader: reactivemongo.bson.handlers.RawBSONReader[T]): Future[Option[T]] = {
  collection.find(QueryBuilder(
    queryDoc = Some(BSONDocument()),
    sortDoc = Some(BSONDocument("_id" -> BSONInteger(-1)))
  )).headOption().mapTo[Option[T]]
}
headOption() works to retrieve a single result, but I'm not explicitly using any kind of Mongo limit clause so I'm worried about this query's impact on the DB. Please help me improve this code. Thanks in advance.
In 0.8 you have to set the batchSize option to 1 in order to tell MongoDB to close the database cursor automatically:
val maybedoc = collection.find(BSONDocument(), QueryOpts().batchSize(1)).headOption
// or using QueryBuilder like you do
val maybedoc2 = collection.find(QueryBuilder(
  queryDoc = Some(BSONDocument()),
  sortDoc = Some(BSONDocument("_id" -> BSONInteger(-1)))
), QueryOpts().batchSize(1)).headOption()
In 0.9 collections have been refactored and greatly simplified. Now you can do this:
val maybedoc = collection.
find(BSONDocument()).
sort(BSONDocument("_id" -> -1)).
one[BSONDocument]
The one[T] method in 0.9 sets the batchSize flag for you and returns a Future[Option[T]].
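Applied to the getLatest method from the question, it would look roughly like this (a sketch; it assumes an implicit BSONDocumentReader[T] in scope, in place of the old RawBSONReader):

def getLatest()(implicit reader: BSONDocumentReader[T]): Future[Option[T]] =
  collection.
    find(BSONDocument()).
    sort(BSONDocument("_id" -> -1)).
    one[T]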
Yes, the headOption() function limits the query to just one result:
def headOption()(implicit ec: ExecutionContext) :Future[Option[T]] = {
collect[Iterable](1).map(_.headOption)
}
https://github.com/zenexity/ReactiveMongo/blob/0.8/src/main/scala/api/cursor.scala#L180