I'm using Clojure's Monger library to connect to a MongoDB database.
I want to update, insert & remove subdocuments in my Mongo database. MongoDB's $push modifier lets me do this on the root of the searched document. But I want to be able to $push onto a sub-collection. Looking at Monger's tests, it looks possible. But I want to be sure I can push to the child-collection of the 3rd parent. Can Monger do something like this?
(mgcol/update mycollection { :my-criteria-key "my-criteria-value" } { $push { "parent.3.child-collection" "fubar" }} )
Even better would be the ability to have a $where clause in my $push. Is something like this possible?
(mgcol/update mycollection
{ :doc-criteria-key "doc-criteria-value" }
{ $push
{ { $where { parent.child.lastname: 'Smith' } }
"fubar" } }
)
But even on a basic level, when I try the following command in my REPL, I get the error below:
The "fubar" database definitely exists;
I'm definitely connected to the DB;
The { :owner "fubar#gmail.com" } criteria is definitely valid; and
I tried both "content.1.content" and "content.$.content":
repl => (mc/update "fubar" { :owner "fubar#gmail.com" } { $push { "content.1.content" { "fu" "bar" } } } )
ClassCastException clojure.lang.Var$Unbound cannot be cast to com.mongodb.DB monger.collection/update (collection.clj:310)
repl => (clojure.repl/pst *e)
ClassCastException clojure.lang.Var$Unbound cannot be cast to com.mongodb.DB
monger.collection/update (collection.clj:310)
bkell.run.run-ring/eval2254 (NO_SOURCE_FILE:46)
clojure.lang.Compiler.eval (Compiler.java:6406)
clojure.lang.Compiler.eval (Compiler.java:6372)
clojure.core/eval (core.clj:2745)
clojure.main/repl/read-eval-print--6016 (main.clj:244)
clojure.main/repl/fn--6021 (main.clj:265)
clojure.main/repl (main.clj:265)
user/eval27/acc--3869--auto----30/fn--32 (NO_SOURCE_FILE:1)
java.lang.Thread.run (Thread.java:619)
Has anyone come across this and solved it?
Thanks
You have a three-part question, with some inconsistencies and holes in the description, so here is my best guess; I hope it is close.
I can get all three to work given a schema matched to your update requests; see test/core.clj below for complete details.
First part: Yes, you can push to the child-collection of the 3rd parent, exactly as you have written.
Second part: You want to move your "$where" clause into the criteria, and use the positional $ operator in the objNew.
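In Monger terms, that update would look something like this (a sketch mirroring the test below; the criteria select the matching array element, and the positional $ refers back to it):

(mgcol/update mycollection
  { :doc-criteria-key "doc-criteria-value"
    "parent.child.lastname" "Smith" }
  { $push { "parent.$.child.lastname" "fubar" } })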
Third part: Yes, your basic update works for me below, exactly as you have written. As for the ClassCastException (clojure.lang.Var$Unbound cannot be cast to com.mongodb.DB): it means the default database was never bound; note the (mg/connect!) and (mg/set-db! (mg/get-db "test")) calls at the top of the test.
The output of "lein test" follows at the bottom. All the best to you in your endeavors.
test/core.clj
(ns free-11749-clojure-subdoc.test.core
(:use [free-11749-clojure-subdoc.core])
(:use [clojure.test])
(:require [monger.core :as mg] [monger.collection :as mgcol] [monger.query])
(:use [monger.operators])
(:import [org.bson.types ObjectId] [com.mongodb DB WriteConcern]))
(deftest monger-sub-document
(mg/connect!)
(mg/set-db! (mg/get-db "test"))
(def mycollection "free11749")
;; first part
(mgcol/remove mycollection)
(is (= 0 (mgcol/count mycollection)))
(def doc1 {
:my-criteria-key "my-criteria-value"
:parent [
{ :child-collection [ "cc0" ] }
{ :child-collection [ "cc1" ] }
{ :child-collection [ "cc2" ] }
{ :child-collection [ "cc3" ] }
{ :child-collection [ "cc4" ] }
]
}
)
(mgcol/insert mycollection doc1)
(is (= 1 (mgcol/count mycollection)))
(mgcol/update mycollection { :my-criteria-key "my-criteria-value" } { $push { "parent.3.child-collection" "fubar" }} )
(def mymap1 (first (mgcol/find-maps mycollection { :my-criteria-key "my-criteria-value" })))
(is (= "fubar" (peek (:child-collection (get (:parent mymap1) 3)))))
(prn (mgcol/find-maps mycollection { :my-criteria-key "my-criteria-value" }))
;; second part
(mgcol/remove mycollection)
(is (= 0 (mgcol/count mycollection)))
(def doc2 {
:doc-criteria-key "doc-criteria-value"
:parent [
{ :child { :lastname [ "Alias" ] } }
{ :child { :lastname [ "Smith" ] } }
{ :child { :lastname [ "Jones" ] } }
]
}
)
(mgcol/insert mycollection doc2)
(is (= 1 (mgcol/count mycollection)))
(mgcol/update mycollection { :doc-criteria-key "doc-criteria-value" "parent.child.lastname" "Smith"} { $push { :parent.$.child.lastname "fubar" } } )
(def mymap2 (first (mgcol/find-maps mycollection { :doc-criteria-key "doc-criteria-value" })))
(is (= "fubar" (peek (:lastname (:child (get (:parent mymap2) 1))))))
(prn (mgcol/find-maps mycollection { :doc-criteria-key "doc-criteria-value" }))
;; third part
(mgcol/remove "fubar")
(is (= 0 (mgcol/count "fubar")))
(def doc3 {
:owner "fubar#gmail.com"
:content [
{ :content [ "cc0" ] }
{ :content [ "cc1" ] }
{ :content [ "cc2" ] }
]
}
)
(mgcol/insert "fubar" doc3)
(is (= 1 (mgcol/count "fubar")))
(mgcol/update "fubar" { :owner "fubar#gmail.com" } { $push { "content.1.content" { "fu" "bar" } } } )
(def mymap3 (first (mgcol/find-maps "fubar" { :owner "fubar#gmail.com" })))
(is (= { :fu "bar" } (peek (:content (get (:content mymap3) 1)))))
(prn (mgcol/find-maps "fubar" { :owner "fubar#gmail.com" }))
)
lein test
Testing free-11749-clojure-subdoc.test.core
({:_id #<ObjectId 4fb3e98447281968f7d42cac>, :my-criteria-key "my-criteria-value", :parent [{:child-collection ["cc0"]} {:child-collection ["cc1"]} {:child-collection ["cc2"]} {:child-collection ["cc3" "fubar"]} {:child-collection ["cc4"]}]})
({:_id #<ObjectId 4fb3e98447281968f7d42cad>, :doc-criteria-key "doc-criteria-value", :parent [{:child {:lastname ["Alias"]}} {:child {:lastname ["Smith" "fubar"]}} {:child {:lastname ["Jones"]}}]})
({:_id #<ObjectId 4fb3e98447281968f7d42cae>, :content [{:content ["cc0"]} {:content ["cc1" {:fu "bar"}]} {:content ["cc2"]}], :owner "fubar#gmail.com"})
Ran 1 tests containing 9 assertions.
0 failures, 0 errors.
I am trying to make a grid with ag-grid and activate sorting and filtering, but it doesn't work on localhost. In the column definition I use

:sortable true
:filter true

But nothing happens. Does anyone know what is wrong?
(ns reagent-ag-grid-ex.core
(:require
[reagent.core :as r]
[cljsjs.ag-grid-react]
[reagent-ag-grid-ex.state :as state]))
;; -------------------------
;; Views
(def ag-adapter (r/adapt-react-class (.-AgGridReact js/agGridReact) ))
;;(defn get-cols [entry]
;; (into [] (map #(hash-map :headerName (-> % key name) :field (-> % key name)) entry)))
;;columnDefs: [ {headerName: "Make", field: "make"}, {headerName: "Model", field: "model"}, {headerName: "Price", field: "price"} ]
;;rowData: [ {make: "Toyota", model: "Celica", price: 35000}, {make: "Ford", model: "Mondeo", price: 32000}, {make: "Porsche", model: "Boxter", price: 72000}]
(def default-col-w 200)
(defn width-helper [lst]
  (+ (* default-col-w (count lst)) 2))
(defn home-page []
[:div [:h2 "Ekspono tag-model"]
[:p "My portfolio / Top Index " [:a {:style {:background-color "#C0C0C0" :float "right" :color "black"}
:href "https://www.google.com" :target "_blank"} "Show problems"]]
[:div {:className "ag-theme-balham" :style {:height 200 :width (width-helper state/cols) :color "purple"}}
[ag-adapter {"columnDefs" state/cols
"rowData" state/rows
"defaultColDef" {:sortable true
:width default-col-w}}]]
[:div [:a {:href "https://www.tabyenskilda.se/fredrik-cumlin/" :target "_blank"}
"#copyright Fredrik Cumlin"]]])
;; -------------------------
;; Initialize app
(defn mount-root []
(r/render [home-page] (.getElementById js/document "app")))
(defn init! []
(mount-root))
Upgrade to the latest ag-grid-react cljsjs distribution (21.0.1-1), e.g. in your lein project.clj switch the dependency to [cljsjs/ag-grid-react "21.0.1-1"]. Sorting and filtering should work on that version.
Also, as a side note: there is no need to specify prop keys with strings; you can use keywords, which is a bit more idiomatic.
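For example, the adapter call from the question could be written like this (same data as in the question, with :filter added to the default column definition since that is what you are after):

[ag-adapter {:columnDefs state/cols
             :rowData state/rows
             :defaultColDef {:sortable true
                             :filter true
                             :width default-col-w}}]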
I have the following aggregation:
db.subtitles.aggregate()
.match({})
.group({_id: {chunkId: "$chunk_id"}, text: { $push:"$text"}})
What this renders is a result like the following:
{
"_id" : {
"chunkId" : "ffdd704b-c441-4b49-a32e-fc2277d99250"
},
"text" : [
"Mula doon, sumasama ako sa grocery, sa palengke, sinusundan ko saan napupunta ang pera.",
"Nagkakaroon sila ng resibo na makikita sa kanilang device.",
"Parang ganun na nga, pero…",
"Kaya parang akong naging buhay na QuickBooks. Gusto ko malaman kung ano ang ginagawa ng mga tao sa pera, magkano kinita nila. ",
"Sa kanilang email o text ay may impormasyon na masasabi mo na \"Itong numero na ito, itong text ay galing halimbawa sa Bank of America, at kumpirmado ito\"",
"Mga 4,500 na interbyu o mahigit pa. Sa buong Silangang Africa, sub Saharan Africa at sa Timog Asia.",
"Sa mga umuusbong na merkado, kapag nagbabayad sila ng kuryente, o kapag sumweldo sila.",
"Hindi ko na gustong makita ang nangyari 3 taon nakalipas. Nais ko lang malaman kung kaya mo itong bayaran sa katapusan ng buwan.",
"Saan ako magpunta?"
]
},
…
What I'd like to do is add another field to this group that gives me a total word count for the text array. In this case roughly 136 words.
How could I adjust my aggregation to accomplish this?
You can calculate the word count before grouping, so you deal with a single "text" field rather than an array of strings.
Starting from v4.2 you can benefit from the $regexFindAll operator:
db.subtitles.aggregate([
{ $match: {} },
{ $addFields: { words: { $size: { $regexFindAll: { input: "$text", regex: /\w+/ }}}}},
{ $group: {_id: {chunkId: "$chunk_id"}, text: { $push:"$text"}, words: {$sum: "$words"}}}
])
Please read the docs regarding collation to ensure proper behaviour of the \w+ regexp. You may want to add some other characters there, e.g. an apostrophe, depending on the language. Precise counting may require quite sophisticated regexes, especially for non-English strings. See Regex word count - matching words with apostrophe for inspiration.
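On MongoDB versions before 4.2, a rough approximation is to $split the text on spaces and count the resulting array; treat this as a sketch, since it over-counts on double spaces and stray punctuation:

db.subtitles.aggregate([
  { $match: {} },
  { $addFields: { words: { $size: { $split: ["$text", " "] } } } },
  { $group: { _id: { chunkId: "$chunk_id" }, text: { $push: "$text" }, words: { $sum: "$words" } } }
])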
You can use $strLenCP with an $addFields stage. Note that this yields the character count of the concatenated strings, not a word count:
db.subtitles.aggregate([
{ $match: { "_id": ObjectId("5d5b889c33acba0b89b97cda") } },
{ $addFields: {
"length": {
$strLenCP: {
$reduce: {
input: "$text",
initialValue: "",
in: { $concat: ["$$value", "$$this"] }
}
}
}
}}
])
We have a MongoDB collection which we want to import into Elasticsearch (for now as a one-off effort). To this end, we have exported the collection with mongoexport. It is a huge JSON file with entries like the following:
{
"RefData" : {
"DebtInstrmAttrbts" : {
"NmnlValPerUnit" : "2000",
"IntrstRate" : {
"Fxd" : "3.1415"
},
"MtrtyDt" : "2020-01-01",
"TtlIssdNmnlAmt" : "200000000",
"DebtSnrty" : "SNDB"
},
"TradgVnRltdAttrbts" : {
"IssrReq" : "false",
"Id" : "BMTF",
"FrstTradDt" : "2019-04-01T12:34:56.789"
},
"TechAttrbts" : {
"PblctnPrd" : {
"FrDt" : "2019-04-04"
},
"RlvntCmptntAuthrty" : "GB"
},
"FinInstrmGnlAttrbts" : {
"ClssfctnTp" : "DBFNXX",
"ShrtNm" : "AVGO 3.625 10/16/24 c24 (URegS)",
"FullNm" : "AVGO 3 5/8 10/15/24 BOND",
"NtnlCcy" : "USD",
"Id" : "USU1109MAXXX",
"CmmdtyDerivInd" : "false"
},
"Issr" : "549300WV6GIDOZJTVXXX"
  }
}
We are using the following Logstash configuration file to import this data set into Elasticsearch:
input {
file {
path => "/home/elastic/FIRDS.json"
start_position => "beginning"
sincedb_path => "/dev/null"
codec => json
}
}
filter {
mutate {
remove_field => [ "_id", "path", "host" ]
}
}
output {
elasticsearch {
hosts => [ "localhost:9200" ]
index => "firds"
}
}
All this works fine: the data ends up in the firds index of Elasticsearch, and a GET /firds/_search returns all the entries within the _source field.
We understand that this field is not indexed and thus not searchable; being searchable is what we are actually after. We want to make all of the entries within the original nested JSON searchable in Elasticsearch.
We assume that we have to adjust the filter {} part of our Logstash configuration, but how? For consistency reasons, it would be nice to keep the original nested JSON structure, but that is not a must. Flattening would also be an option, so that e.g.
"RefData" : {
"DebtInstrmAttrbts" : {
"NmnlValPerUnit" : "2000" ...
becomes a single key-value pair "RefData.DebtInstrmAttrbts.NmnlValPerUnit" : "2000".
It would be great if we could do that immediately with Logstash, without using an additional Python script operating on the JSON file we exported from MongoDB.
EDIT: Workaround
Our current work-around is to (1) dump the MongoDB database to dump.json, then (2) flatten it with jq using the following expression, and finally (3) manually import it into Elasticsearch.
Regarding (2), this is the flattening step:
jq '. as $in | reduce leaf_paths as $path ({}; . + { ($path | join(".")): $in | getpath($path) }) | del(."_id.$oid") '
-c dump.json > flattened.json
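Applied to the sample document above, this produces one flat JSON object per input document, e.g. (excerpt):

{"RefData.DebtInstrmAttrbts.NmnlValPerUnit":"2000","RefData.DebtInstrmAttrbts.IntrstRate.Fxd":"3.1415","RefData.TradgVnRltdAttrbts.Id":"BMTF","RefData.FinInstrmGnlAttrbts.NtnlCcy":"USD","RefData.Issr":"549300WV6GIDOZJTVXXX"}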
References
Walker Rowe: ElasticSearch Nested Queries: How to Search for Embedded Documents
ElasticSearch search in document and in dynamic nested document
Mapping for Nested JSON document in Elasticsearch
Logstash - import nested JSON into Elasticsearch
Remark for the curious: the shown JSON is a (modified) entry from the Financial Instruments Reference Database System (FIRDS), available from the European Securities and Markets Authority (ESMA), a European financial regulatory agency overseeing the capital markets.
I have a complex query that works perfectly in the Neo4j Browser and that I want to use through the REST API, but there are clauses I can't cope with.
The syntax looks like:
MATCH p=()-[*]->(node1)
WHERE …
WITH...
....
FOREACH … SET …
I constructed the query with transactional Cypher, as suggested by @cybersam, but I can't manage to use more than one clause anyway.
To give an example, if I write the statement on one line:
:POST /db/data/transaction/commit {
"statements": [
{
"statement": "MATCH p = (m)-[*]->(n:SOL {PRB : {PRB1}}) WHERE nodes (p)
MATCH q= (o:SOL {PRB : {PRB2}} RETURN n, p, o, q;",
"parameters": {"PRB1": "Title of problem1", "PRB2": "Title of problem2"}
} ],
"resultDataContents": ["graph"] }
I obtain:
{"results":[],"errors":[{"code":"Neo.ClientError.Statement.SyntaxError","message":"Invalid input 'R': expected whitespace, comment, ')' or a relationship pattern (line 1, column 90 (offset: 89))\r\n\"MATCH p = (m)-[*]->(n:SOL {PRB : {PRB1}}) WHERE nodes (p) MATCH q= (o:SOL {PRB : {PRB2}} RETURN n, p, o, q;\"\r\n ^"}]}
But if I put it on several lines:
:POST /db/data/transaction/commit {
"statements": [
{
"statement": "MATCH p = (m)-[*]->(n:SOL {PRB : {PRB1}})
WHERE nodes (p)
MATCH q= (o:SOL {PRB : {PRB2}}
RETURN n, p, o, q;",
"parameters": {"PRB1": "Title of problem1", "PRB2": "Title of problem2"}
}
],
"resultDataContents": ["graph"]
}
it says:
{"results":[],"errors":[{"code":"Neo.ClientError.Request.InvalidFormat","message":"Unable to deserialize request: Illegal unquoted character ((CTRL-CHAR, code 10)): has to be escaped using backslash to be included in string value\n at [Source: HttpInputOverHTTP#41fa906c; line: 4, column: 79]"}]}
Please, I need your help!
Alex
Using the transactional Cypher HTTP API, you can just pass the same Cypher statement to the API.
To quote from this section of the doc, here is an example of the simplest way to do that:
Begin and commit a transaction in one request

If there is no need to keep a transaction open across multiple HTTP requests, you can begin a transaction, execute statements, and commit with just a single HTTP request.
Example request
POST http://localhost:7474/db/data/transaction/commit
Accept: application/json; charset=UTF-8
Content-Type: application/json
{
"statements" : [ {
"statement" : "CREATE (n) RETURN id(n)"
} ]
}
Example response
200: OK
Content-Type: application/json
{
"results" : [ {
"columns" : [ "id(n)" ],
"data" : [ {
"row" : [ 6 ],
"meta" : [ null ]
} ]
} ],
"errors" : [ ]
}
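As a side note on the two errors in the question: the first (Invalid input 'R') points at a missing closing parenthesis in (o:SOL {PRB : {PRB2}}, and the second occurs because JSON string values cannot contain literal line breaks; they must be escaped as \n, or the statement kept on a single line. Fixing only those two syntax issues, the request might look like this (a sketch, with the question's parameters):

POST http://localhost:7474/db/data/transaction/commit
Accept: application/json; charset=UTF-8
Content-Type: application/json

{
  "statements" : [ {
    "statement" : "MATCH p = (m)-[*]->(n:SOL {PRB : {PRB1}}) WHERE nodes(p) MATCH q = (o:SOL {PRB : {PRB2}}) RETURN n, p, o, q",
    "parameters" : { "PRB1" : "Title of problem1", "PRB2" : "Title of problem2" }
  } ],
  "resultDataContents" : [ "graph" ]
}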
While working on rational numbers with Leon, I have to add isRational as a requirement pretty much everywhere.
For example:
import leon.lang._
case class Rational (n: BigInt, d: BigInt) {
def +(that: Rational): Rational = {
require(isRational && that.isRational)
Rational(n * that.d + that.n * d, d * that.d)
} ensuring { _.isRational }
def *(that: Rational): Rational = {
require(isRational && that.isRational)
Rational(n * that.n, d * that.d)
} ensuring { _.isRational }
// ...
def isRational = !(d == 0)
def nonZero = n != 0
}
Is it possible to add a require statement in a class constructor to DRY this code so that it applies to all instances of the data structure? I tried adding it on the first line of the class body but it seems to have no effect...
case class Rational (n: BigInt, d: BigInt) {
require(isRational) // NEW
// ... as before ...
def lemma(other: Rational): Rational = {
Rational(n * other.d + other.n * d, d * other.d)
}.ensuring{_.isRational}
def lemmb(other: Rational): Boolean = {
require(other.d * other.n >= 0)
this <= (other + this)
}.holds
}
This does not prevent Leon from creating a Rational(0, 0), for example, as the report suggests:
[ Info ] - Now considering 'postcondition' VC for Rational$$plus #9:16...
[ Info ] => VALID
[ Info ] - Now considering 'postcondition' VC for Rational$$times #14:16...
[ Info ] => VALID
[ Info ] - Now considering 'postcondition' VC for Rational$lemma #58:14...
[ Error ] => INVALID
[ Error ] Found counter-example:
[ Error ] $this -> Rational(1, 0)
[ Error ] other -> Rational(1888, -1)
[ Info ] - Now considering 'postcondition' VC for Rational$lemmb #60:41...
[ Error ] => INVALID
[ Error ] Found counter-example:
[ Error ] $this -> Rational(-974, 0)
[ Error ] other -> Rational(-5904, -1)
[ Info ] - Now considering 'precond. (call $this.<=((other + $this)))' VC for Rational$lemmb #62:5...
[ Error ] => INVALID
[ Error ] Found counter-example:
[ Error ] $this -> Rational(-1, 0)
[ Error ] other -> Rational(0, -1)
[ Info ] - Now considering 'precond. (call other + $this)' VC for Rational$lemmb #62:14...
[ Error ] => INVALID
[ Error ] Found counter-example:
[ Error ] $this -> Rational(1, 2)
[ Error ] other -> Rational(7719, 0)
[ Info ] ┌──────────────────────┐
[ Info ] ╔═╡ Verification Summary ╞═══════════════════════════════════════════════════════════════════╗
[ Info ] ║ └──────────────────────┘ ║
[ Info ] ║ Rational$$plus postcondition 9:16 valid U:smt-z3 0.010 ║
[ Info ] ║ Rational$$times postcondition 14:16 valid U:smt-z3 0.012 ║
[ Info ] ║ Rational$lemma postcondition 58:14 invalid U:smt-z3 0.011 ║
[ Info ] ║ Rational$lemmb postcondition 60:41 invalid U:smt-z3 0.018 ║
[ Info ] ║ Rational$lemmb precond. (call $this.<=((ot... 62:5 invalid U:smt-z3 0.015 ║
[ Info ] ║ Rational$lemmb precond. (call other + $this) 62:14 invalid U:smt-z3 0.011 ║
[ Info ] ╟┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╢
[ Info ] ║ total: 6 valid: 2 invalid: 4 unknown 0 0.077 ║
[ Info ] ╚════════════════════════════════════════════════════════════════════════════════════════════╝
(this and other don't always meet the constructor requirement.)
Am I missing something?
The main difficulty with invariants can be decomposed into two problems:
Problem 1
Given
case class A(v: BigInt) {
require(v > 0)
}
Leon would have to inject this requirement into the preconditions of all functions taking A as an argument, so
def foo(a: A) = {
a.v
} ensuring { _ > 0 }
will need to become:
def foo(a: A) = {
require(a.v > 0)
a.v
} ensuring { _ > 0 }
While trivial for this case, consider the following functions:
def foo2(as: List[A]) = {
require(as.nonEmpty)
as.head.v
} ensuring { _ > 0 }
or
def foo3(as: Set[A], a: A): Boolean = {
  as contains a
}
Here it is not so easy to constrain foo2 so that the list contains only valid As: Leon would have to synthesize traversal functions on ADTs so that these preconditions can be injected.
Moreover, it is impossible to specify that a Set[A] contains only valid As, as Leon lacks the capability to traverse and constrain the set.
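For illustration, the kind of traversal Leon would have to synthesize for foo2 can be written by hand; the helper allValid below is hypothetical and assumes Leon's leon.collection.List:

import leon.collection._

def allValid(as: List[A]): Boolean = as match {
  case Nil() => true
  case Cons(h, t) => h.v > 0 && allValid(t)
}

def foo2(as: List[A]) = {
  require(as.nonEmpty && allValid(as))
  as.head.v
} ensuring { _ > 0 }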
Problem 2
While it would be practical to write something like the following:
case class A(a: BigInt) {
require(invariant)
def invariant: Boolean = // ...
}
You have a chicken-and-egg issue: invariant itself would be injected with a precondition checking invariant on this.
I believe both problems can be solved (or we can restrict the usage of these invariants), but they are the reasons why class invariants have not yet been implemented.