I am using the MongoDB driver from https://github.com/ankhers/mongodb to query a MongoDB database in an Elixir/Phoenix project. A simple query such as
cursor = Mongo.find(:mongo, "posts", %{})
list = Enum.to_list(cursor)
object = Enum.fetch(list, 0)
object = elem(object, 1)
new_list = Map.fetch(object, "name")
new_list = elem(new_list, 1)
new_cursor = Mongo.find(:mongo, "posts", %{"name" => new_list})
new_list = Enum.to_list(new_cursor)
works fine, but I am wondering how to perform deeper searches, since I have nested JSON documents such as
{"posts":{"name":"something","attributes":{"aspect":{"and_so_on":"endpoint"}}}}.
So how do I get to "endpoint" in this case?
Your code is not very idiomatic Elixir, in the first place. One should not reassign values on each step; we usually use pipes (Kernel.|>/2) instead.
That said, your original cursor might be written as
list =
  :mongo
  |> Mongo.find("posts", %{})
  |> Enum.fetch(0)
  |> elem(1)
  |> Map.fetch("name")
  |> elem(1)

new_list =
  :mongo
  |> Mongo.find("posts", %{"name" => list})
  |> Enum.to_list()
or, better, with pattern matching
%{"name" => one} =
Mongo.find_one(:mongo, "posts", %{})
one
#⇒ "something"
MongoDB also supports the Mongo query language in Mongo.find/4, as shown in the examples here. To get to the nested element one might use dot notation:
Mongo.find(:mongo, "posts",
%{"attributes" => %{"aspect" => "and_so_on"}})
I created a Quill query, which should find some data in the database by a given parameter:
val toFind = "SomeName"
val query = query.find(value => infix"$value = ${lift(toFind)}".as[Boolean])
It works fine when, for example, the database contains "SomeName", but if I try to get the same results by passing "somename", I find nothing. The problem is that the search is case-sensitive.
Is it possible to always find values in a case-insensitive way? I have not found anything about it in the Quill docs.
OK, I found a solution. It is enough to apply the LOWER() SQL function in the infix:
val query = query.find(value => infix"LOWER($value) = ${lift(toFind.toLowerCase)}".as[Boolean])
In Grafana I have a dashboard that uses InfluxDB 1.x as a data source. I'm migrating it to use an InfluxDB 2.0 data source and Flux queries.
In the Grafana dashboard there is a Variable called "Server" which has the following query defined:
SHOW TAG VALUES ON telegraf WITH KEY = "host"
I'm really struggling to create a similar variable with a Flux query.
Any idea how to accomplish this?
Thanks
Try this:
import "influxdata/influxdb/schema"
schema.measurementTagValues(
bucket: "my_bucket",
tag: "host",
measurement: "my_measurement"
)
This works for me:
from(bucket: "telegraf")
|> range(start: -15m)
|> group(columns: ["host"], mode:"by")
|> keyValues(keyColumns: ["host"])
Note: if you want to go further back in time (e.g. -30d), performance will be slow. You can work around this by loading the query only once (an option available for Grafana variables), or better, by adding some filters and selectors, for example:
from(bucket: "telegraf")
|> range(start: -30d)
|> filter(fn: (r) => r._field == "you field")
|> filter(fn: (r) => /* more filter*/)
|> group(columns: ["host"], mode:"by")
|> first()
|> keyValues(keyColumns: ["host"])
I'm using the following Flux code to extract all host tag values for the bucket "telegraf", just like your posted InfluxQL:
import "influxdata/influxdb/schema"
schema.tagValues(bucket: "telegraf", tag: "host")
InfluxDB has a bit about this in their documentation:
https://docs.influxdata.com/influxdb/v2.0/query-data/flux/explore-schema/#list-tag-values
I am using the MongoDB driver from https://github.com/ankhers/mongodb to query a MongoDB database in an Elixir/Phoenix project. In another question, I asked how to query nested JSON documents. Another issue would be how to query inserted documents. For example, I can do the following in Python
date_=db['posts']['name'][name]['date'][date]
Here, 'posts' is the name of the collection, and the others are inserted documents. For example, one 'date' document was inserted through:
db['posts']['name'][name].insert_one({"date":date})
When I want to obtain all the inserted dates in Python, I can do
date_list = []

def get_date(db):
    db_posts_name = db['posts']['name'][name]
    for date_query in db_posts_name.find():
        date_list.append(date_query["date"])
But I am at a loss as to how to do the same thing in Elixir/Phoenix, because if I do something like
list =
  :mongo
  |> Mongo.find("posts", %{})
  |> Enum.fetch(4)
  |> elem(1)
  |> Map.fetch("name")
  |> elem(1)

new_list =
  :mongo
  |> Mongo.find("posts", %{"name" => list})

another_list = new_list.find("date", %{})
I get the error
Called with 3 arguments
%Mongo.Cursor{coll: "posts", conn: #PID<0.434.0>, opts: [slave_ok: true], query: %{"name" => name}, select: nil}
:find
[]
Is there a way to do this?
Mongo.find always returns a cursor. A cursor is like a streaming API, so you have to call a function like Enum.take/2 or Enum.to_list/1 on it. If you are processing very long collections, it is a good idea to use the Stream module instead.
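A minimal sketch of that lazy approach (assuming name is bound as in your code):

dates =
  :mongo
  |> Mongo.find("posts", %{"name" => name})
  |> Stream.map(fn doc -> doc["date"] end)
  |> Enum.take(100)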
If you want to fetch one document, then you can use Mongo.find_one.
I don't fully understand your example. I assume name is a parameter:
date_list = []

def get_date(db):
    db_posts_name = db['posts']['name'][name]
    for date_query in db_posts_name.find():
        date_list.append(date_query["date"])
The following code fetches from the posts collection all documents whose name equals the parameter name, and returns only the date field:
date_list =
  :mongo
  |> Mongo.find("posts", %{"name" => name}, projection: %{"date" => 1})
  |> Enum.map(fn %{"date" => date} -> date end)
By the way, you can give elixir-mongodb-driver a try. That implementation supports the bulk API, the change streams API, and the transactions API as well.
I am trying to get an aggregate in ReactiveMongo 0.12 and Play Framework 2.6 (using JSON collections - not BSON) by filtering dates from a collection called "visitors". A typical document may look like this:
{ "_id": ObjectID("59c33152ca2abb344c575152"), "placeId": ObjectID("59c33152ca2abb344c575152"), "date": ISODate("2017-03-26T00:00:00Z"), "visitors": 1200 }
So from here I want to aggregate this data to get various visitor totals, averages, etc, grouping by placeId (which identifies the place in another collection) and filtering by dates after 15-05-2016.
I've based this on this similar question: without the Match it works, but with it, it does not. There isn't an error, it just doesn't return anything:
def getVisitorAggregate(col: JSONCollection) = {
  import col.BatchCommands.AggregationFramework.{Group, Match, SumField, AvgField, MinField, MaxField}

  val format = new java.text.SimpleDateFormat("dd-MM-YYYY")
  val myDate = "15-05-2016"
  val parseDate: Date = format.parse(myDate)
  val longDate: Long = parseDate.getTime

  col.aggregate(
    Group(JsString("$placeId"))(
      "totalVisitors" -> SumField("visitors"),
      "avgVisitors" -> AvgField("visitors"),
      "minVisitors" -> MinField("visitors"),
      "maxVisitors" -> MaxField("visitors")
    ),
    List(Match(Json.obj("date" -> Json.obj("$gte" -> JsNumber(longDate)))))
  )
  .map(_.head[VisitorAggregate])
}
I have looked and tested for many hours online and I cannot find the correct syntax, but I'm sure this will be simple for someone who knows. Thanks.
ISODate is a MongoDB type, and col.aggregate does not cast the arguments, so "date" -> Json.obj("$gte" -> JsNumber(longDate)) is wrong.
You need to use a type that will be converted to an ISODate, and I am pretty sure it is not JsNumber.
It would be the BSONDateTime type if you were using BSON, but you are not.
According to the documentation, it must be a JsObject with a $date JsNumber field with the timestamp (milliseconds) as value.
So the solution could be (I did not verify it):
Match(Json.obj("date" -> Json.obj("$gte" -> Json.obj("$date" -> JsNumber(longDate)))))
I hate to answer my own question here, but now that I have figured this out I really want to clarify for others how this is done using aggregate. Ultimately there were two parts to this question.
1) What is the syntax for querying dates?
As #AndriyKuba mentioned, and as I had seen in the documentation but not fully understood, the query is formulated like this:
Json.obj("date" -> Json.obj("$gte" -> Json.obj("$date" -> JsNumber(longDate))))
2) How do I match a query within an aggregate?
This is more a question of the order of the pipeline. I was originally trying to apply Match after grouping and aggregating the data, which is (obviously) only going to filter the data afterwards. As I wanted to first restrict the date range and then aggregate that data, I had to match first; this also meant that some of the syntax had to change accordingly:
def getVisitorAggregate(col: JSONCollection) = {
  import col.BatchCommands.AggregationFramework.{Group, Match, SumField, AvgField, MinField, MaxField}

  // note: lowercase "yyyy" (calendar year), not "YYYY" (week year)
  val format = new java.text.SimpleDateFormat("dd-MM-yyyy")
  val myDate = "15-05-2016"
  val parseDate: Date = format.parse(myDate)
  val longDate: Long = parseDate.getTime

  col.aggregate(
    Match(Json.obj("date" -> Json.obj("$gte" -> Json.obj("$date" -> JsNumber(longDate))))),
    List(Group(JsString("$placeId"))(
      "totalVisitors" -> SumField("visitors"),
      "avgVisitors" -> AvgField("visitors"),
      "minVisitors" -> MinField("visitors"),
      "maxVisitors" -> MaxField("visitors")
    ))
  )
  .map(_.head[VisitorAggregate])
}
It is really frustrating that there isn't more documentation out there on using the Play Framework with ReactiveMongo, as a lot of time goes into trying to fathom the syntax and logic.
My code is resulting in a rare double or triple insert into the database and I am at a loss as to why. It is very difficult to reproduce, but I can look at the timestamps and see that the created-at times are basically the same when it happens. I believe it only occurs when the CardMeta is not already found.
I figure I need to add a unique key or wrap it in a transaction.
def get_or_create_meta(user, card) do
  case Repo.all(from c in CardMeta, where: c.user_id == ^user.id,
         where: c.card_id == ^card.id) do
    [] ->
      %CardMeta{}

    metas ->
      hd metas
  end
end
def bury(user, card) do
  get_or_create_meta(user, card)
  |> Repo.preload([:card, :user])
  |> CardMeta.changeset(%{last_seen: DateTime.utc_now(), user_id: user.id, card_id: card.id,
      learning: false, known: false, prev_interval: 0})
  |> Repo.insert_or_update
end
Edit: adding changeset source
def changeset(struct, params \\ %{}) do
  struct
  |> cast(params, [:last_seen, :difficulty, :prev_interval, :due, :known, :learning,
      :user_id, :card_id])
  |> assoc_constraint(:user)
  |> assoc_constraint(:card)
end
Calling bury from the controller
def update(conn, %{"currentCardId" => card_id, "command" => command}) do
  # perform some update on card
  card = Repo.get!(Card, card_id)
  user = Guardian.Plug.current_resource(conn)

  case command do
    "fail" ->
      SpacedRepetition.fail(user, card)
    "learn" ->
      SpacedRepetition.learn(user, card)
    _ ->
      SpacedRepetition.bury(user, card)
  end

  sendNextCard(conn, user)
end
Edit: I noticed the last_seen field differs by microseconds between the duplicated rows, whereas the created-at field does not have that resolution. Thus I suspect the insert_or_update call is fine, but the controller is firing twice before the DB updates. This could be something on the client side, which I don't want to think about, so I am just going to add a unique key.
As an alternative to #aliCna's answer, if you don't want to change the primary key on CardMeta, you can put a unique index constraint in the database with a migration:
defmodule YourApp.Repo.Migrations.AddCardMetaUniqueIndex do
  use Ecto.Migration

  def change do
    create unique_index(
      :card_meta,
      [:card_id, :user_id],
      name: :card_meta_unique_index)
  end
end
Which you can then handle in your changeset to produce nice errors if conflicts occur:
def changeset(struct, params \\ %{}) do
  struct
  |> cast(params, [:last_seen, :difficulty, :prev_interval, :due, :known, :learning,
      :user_id, :card_id])
  |> assoc_constraint(:user)
  |> assoc_constraint(:card)
  |> unique_constraint(:user_id, name: :card_meta_unique_index)
end
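With the constraint in place, a racing duplicate insert comes back as {:error, changeset} instead of a second row, so the caller can simply treat it as "already exists". A sketch of how the controller from the question might handle that, assuming fail/2 and learn/2 also return {:ok, _} or {:error, _} tuples:

def update(conn, %{"currentCardId" => card_id, "command" => command}) do
  card = Repo.get!(Card, card_id)
  user = Guardian.Plug.current_resource(conn)

  result =
    case command do
      "fail" -> SpacedRepetition.fail(user, card)
      "learn" -> SpacedRepetition.learn(user, card)
      _ -> SpacedRepetition.bury(user, card)
    end

  case result do
    {:ok, _meta} ->
      sendNextCard(conn, user)

    {:error, _changeset} ->
      # the unique index rejected a concurrent duplicate;
      # the row already exists, so it is safe to carry on
      sendNextCard(conn, user)
  end
end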
I believe you can solve this by adding a composite primary key on user_id and card_id
defmodule Anything.CardMeta do
  use Anything.Web, :model

  @primary_key false
  schema "card_meta" do
    field :user_id, :integer, primary_key: true
    field :card_id, :integer, primary_key: true
    . . .

    timestamps()
  end
end
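Note that the schema change alone does not change the table in the database; the primary key has to be changed there as well. A rough migration sketch, assuming PostgreSQL and the default card_meta table and constraint names:

defmodule Anything.Repo.Migrations.CardMetaCompositePk do
  use Ecto.Migration

  def up do
    # drop the surrogate id column and promote (user_id, card_id) to the primary key
    alter table(:card_meta) do
      remove :id
    end

    execute "ALTER TABLE card_meta ADD PRIMARY KEY (user_id, card_id)"
  end

  def down do
    execute "ALTER TABLE card_meta DROP CONSTRAINT card_meta_pkey"

    alter table(:card_meta) do
      add :id, :bigserial, primary_key: true
    end
  end
end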
If this doesn't solve your problem, please add your data model here!