In Grafana I have a dashboard that uses InfluxDB 1.x as a data source. I'm migrating it to the InfluxDB 2.0 data source and Flux queries.
In the Grafana dashboard there is a Variable called "Server" which has the following query defined:
SHOW TAG VALUES ON telegraf WITH KEY = "host"
I'm really struggling to create a similar variable with a Flux query.
Any idea how to accomplish this?
Thanks
Try this:
import "influxdata/influxdb/schema"

schema.measurementTagValues(
  bucket: "my_bucket",
  tag: "host",
  measurement: "my_measurement"
)
This works for me:
from(bucket: "telegraf")
  |> range(start: -15m)
  |> group(columns: ["host"], mode: "by")
  |> keyValues(keyColumns: ["host"])
Note: if you want to look further back in time (e.g. -30d), performance will be slow. You can work around this by loading the query only once (an option available in Grafana variables), or better, by adding some filters and selectors.
for example:
from(bucket: "telegraf")
  |> range(start: -30d)
  |> filter(fn: (r) => r._field == "your_field")
  // add more filters here as needed
  |> group(columns: ["host"], mode: "by")
  |> first()
  |> keyValues(keyColumns: ["host"])
I'm using the following Flux code to extract all host tag values for the bucket "telegraf", just like your posted InfluxQL query:
import "influxdata/influxdb/schema"
schema.tagValues(bucket: "telegraf", tag: "host")
InfluxDB has a bit about this in their documentation:
https://docs.influxdata.com/influxdb/v2.0/query-data/flux/explore-schema/#list-tag-values
Unable to change the legend name in Grafana using InfluxDB (Flux as the query language). Previously I was using InfluxQL as the query language, and at that time Grafana provided an option to set the legend name. But after switching to Flux, that option seems to be missing. Now it always shows the legend name as _value, and I need to change it to some custom text. Please find below the query I'm using. Thanks for your time in advance.
bucket1 = from(bucket: "NOAA_water_database/autogen")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r._measurement == "ak_api_time" and (r._field == "device_id"))

bucket2 = from(bucket: "NOAA_water_database/autogen")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r._measurement == "ak_app_launch" and (r._field == "device_id"))

union(tables: [bucket1, bucket2])
  |> filter(fn: (r) => (r.browser == "chrome"))
  |> group(columns: ["device_id"])
  |> unique(column: "_value")
  |> count(column: "_value")
  |> set(key: "_wanted_field", value: "Hi, Mom!")
  |> set(key: "_unwanted_field", value: "")
In Grafana, it is possible to add a field override for the display name (under 'Standard options > Display name').
As the override value, you can reference values from your query through field variables (documented under data links):
Field variables
Field-specific variables are available under the __field namespace:
__field.name - the name of the field
__field.labels.<LABEL> - the label's value. If your label contains dots, use the __field.labels["<LABEL>"] syntax.
For example, in my InfluxDB setup, sensor readings have a tag 'item'. I can reference that with ${__field.labels.item}.
If you want to modify the value of a tag in your time series, you can do that with the map() function.
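As a rough sketch of that map() idea (the "server-" prefix here is made up purely for illustration; it assumes your series carry a host tag):

```flux
from(bucket: "telegraf")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  // rewrite the host tag value on every row, e.g. "web01" -> "server-web01"
  |> map(fn: (r) => ({ r with host: "server-" + r.host }))
```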
I am using the mongodb driver from https://github.com/ankhers/mongodb to query a mongodb database in an elixir/phoenix project. A simple query such as
cursor = Mongo.find(:mongo, "posts",%{})
list = Enum.to_list(cursor)
object= Enum.fetch(list,0)
object= elem(object, 1)
new_list=Map.fetch(object, "name")
new_list=elem(new_list,1)
new_cursor= Mongo.find(:mongo, "posts",%{"name" => new_list})
new_list=Enum.to_list(new_cursor)
is no problem, but I am wondering how to perform deeper searches as I have nested jsons such as
{"posts":{"name":"something","attributes":{"aspect":{"and_so_on":"endpoint"}}}}.
So how to get to "endpoint" in this case ?
First of all, your code is not idiomatic Elixir. One should not reassign values at each step; we usually use the pipe operator Kernel.|>/2 instead.
That said, your original cursor might be written as
list =
  :mongo
  |> Mongo.find("posts", %{})
  |> Enum.fetch(0)
  |> elem(1)
  |> Map.fetch("name")
  |> elem(1)

new_list =
  :mongo
  |> Mongo.find("posts", %{"name" => list})
  |> Enum.to_list()
or, better, with pattern matching
%{"name" => one} =
Mongo.find_one(:mongo, "posts", %{})
one
#⇒ "something"
MongoDB also supports the Mongo query language in Mongo.find/4. To get to the nested "endpoint" element, one might use dot notation:
Mongo.find(:mongo, "posts",
  %{"attributes.aspect.and_so_on" => "endpoint"})
I am using the mongodb driver from https://github.com/ankhers/mongodb to query a MongoDB database in an Elixir/Phoenix project. In another question, I asked how to query nested JSONs. Another issue is how to query inserted documents. For example, I can do the following in Python:
date_=db['posts']['name'][name]['date'][date]
Here, 'posts' is the name of the collection, and the others are inserted documents. For example, one 'date' document was inserted through:
db['posts']['name'][name].insert_one({"date":date})
When I want to obtain all the inserted dates in Python, I can do:
date_list = []
def get_date(db):
    db_posts_name = db['posts']['name'][name]
    for date_query in db_posts_name.find():
        date_list.append(date_query["date"])
But I am at a loss in Elixir/Phoenix to do the same thing, because if I do something like
list =
:mongo
|> Mongo.find("posts", %{})
|> Enum.fetch(4)
|> elem(1)
|> Map.fetch("name")
|> elem(1)
new_list =
:mongo
|> Mongo.find("posts", %{"name" => list})
another_list=new_list.find("date",%{})
I get this error:
Called with 3 arguments
%Mongo.Cursor{coll: "posts", conn: #PID<0.434.0>, opts: [slave_ok: true], query: %{"name" => name}, select: nil}
:find
[]
Is there a way to do this ?
Mongo.find always returns a cursor. A cursor is like a streaming API, so you have to call a function such as Enum.take/2 or Enum.to_list/1. If you are processing very long collections, it is a good idea to use the Stream module instead.
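A minimal sketch of the Stream approach, assuming the same :mongo connection and posts collection as above (the field access and take count are just for illustration):

```elixir
# Lazily map over the cursor; documents are only fetched as Enum.take/2
# demands them, instead of materializing the whole collection.
:mongo
|> Mongo.find("posts", %{})
|> Stream.map(fn doc -> doc["name"] end)
|> Stream.reject(&is_nil/1)
|> Enum.take(10)
```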
If you want to fetch just one document, you can use Mongo.find_one.
I don't quite understand your example. I assume name is a parameter:
date_list = []
def get_date(db):
    db_posts_name = db['posts']['name'][name]
    for date_query in db_posts_name.find():
        date_list.append(date_query["date"])
The following code fetches all documents in the posts collection whose name equals the parameter name, returning only the date field:
date_list = :mongo
|> Mongo.find("posts", %{"name" => name}, %{"date" => 1})
|> Enum.map(fn %{"date" => date} -> date end)
By the way, you can give elixir-mongodb-driver a try. That implementation supports the bulk API, the change streams API, and the transaction API as well.
I have a schema two_fa_details where answer and question_id are fields, and the two are unique together.
When I try to insert data into it, the first insert works, but updating it the next time doesn't.
It fails with a constraint error.
I have a function set_two_factor_details written for updating the table.
The function works fine for inserting the data the very first time, but when I update it, it doesn't work. I have a PUT API for this function.
This is my migration file for the two_fa_details schema:
def change do
  create table(:two_fa_details) do
    add :answer, :string
    add :userprofile_id, references(:user_profile, on_delete: :nothing)
    add :question_id, references(:questions, on_delete: :nothing)
    timestamps()
  end

  create index(:two_fa_details, [:userprofile_id])
  create index(:two_fa_details, [:question_id])
  create unique_index(:two_fa_details, [:userprofile_id, :question_id], name: :user_twofa_detail)
end
Here is a snippet of the code:
def set_twofactor_details(client_id, twofa_records) do
  user = Repo.get_by(UserProfile, client_id: client_id)
  twofa_records = Enum.map(twofa_records, &get_twofa_record_map/1)

  Enum.map(twofa_records, fn twofa_record ->
    Ecto.build_assoc(user, :two_fa_details)
    |> TwoFaDetails.changeset(twofa_record)
  end)
  |> Enum.zip(0..Enum.count(twofa_records))
  |> Enum.reduce(Ecto.Multi.new(), fn {record, id}, acc ->
    Ecto.Multi.insert_or_update(acc, String.to_atom("twfa_record_#{id}"), record)
  end)
  |> IO.inspect()
  |> Ecto.Multi.update(
    :update_user,
    Ecto.Changeset.change(user, two_factor_authentication: true, force_reset_twofa: false)
  )
  |> Repo.transaction()
  |> IO.inspect()
  |> case do
    {:ok, _} ->
      {:ok, :updated}

    {:error, _, changeset, _} ->
      error_string = get_first_changeset_error(changeset)
      Logger.error("Error while updating TWOFA: #{error_string}")
      {:error, 41001, error_string}
  end
end
The expected output is basically an updated table and a "two FA details updated" message, but the logs show a constraint error. Please help me with this; I am new to Elixir.
{:error, :twfa_record_0,
#Ecto.Changeset<
action: :insert,
changes: %{answer: "a", question_id: 1, userprofile_id: 1},
errors: [
unique_user_twofa_record: {"has already been taken",
[constraint: :unique, constraint_name: "user_twofa_detail"]}
],
data: #Accreditor.TwoFaDetailsApi.TwoFaDetails<>,
valid?: false
>, %{}}
[error] Error while updating TWOFA: `unique_user_twofa_record` has already been taken
You wrote:
the output should be basically updating the table and returning two fa details updated message.
But the code returns:
#Ecto.Changeset<
action: :insert,
changes: %{answer: "a", question_id: 1, userprofile_id: 1},
errors: [
unique_user_twofa_record: {"has already been taken",
[constraint: :unique, constraint_name: "user_twofa_detail"]}
],
data: #Accreditor.TwoFaDetailsApi.TwoFaDetails<>,
valid?: false
>
Notice how it says action: :insert. So you are not updating but inserting, which explains the error.
insert_or_update will only update a record if the record was loaded from the database. In your code, you are building records from scratch, so they will always be inserts. You need to use Repo.get (or similar) to fetch existing records before passing them to the changeset, so that insert_or_update can update them.
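A rough sketch of that idea, reusing the TwoFaDetails schema and fields from your code (get_or_build/2 is a hypothetical helper name):

```elixir
# Hypothetical helper: load the existing row if there is one, otherwise
# build a fresh struct, so that insert_or_update/2 picks the right action.
defp get_or_build(user, question_id) do
  case Repo.get_by(TwoFaDetails, userprofile_id: user.id, question_id: question_id) do
    nil -> Ecto.build_assoc(user, :two_fa_details)
    existing -> existing
  end
end
```

Inside the reduce you would then start each changeset from get_or_build(user, twofa_record.question_id) instead of always calling Ecto.build_assoc/2.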
I tried doing it using Ecto upserts, and it worked.
Here is a snippet of code to refer to:
Ecto.Multi.insert_or_update(acc, String.to_atom("twfa_record_#{id}"), record,
on_conflict: :replace_all_except_primary_key,
conflict_target: [:userprofile_id, :question_id] )
My code is resulting in a rare double or triple insert into the database and I am at a loss as to why. It is very difficult to reproduce but I can look at the timestamps to see the created at time is basically the same when it happens. I believe it only occurs when the CardMeta is not already found.
I figure I need to add a unique key or wrap it in a transaction.
def get_or_create_meta(user, card) do
  case Repo.all(from c in CardMeta, where: c.user_id == ^user.id,
      where: c.card_id == ^card.id) do
    [] ->
      %CardMeta{}

    metas ->
      hd metas
  end
end

def bury(user, card) do
  get_or_create_meta(user, card)
  |> Repo.preload([:card, :user])
  |> CardMeta.changeset(%{last_seen: DateTime.utc_now(), user_id: user.id, card_id: card.id,
    learning: false, known: false, prev_interval: 0})
  |> Repo.insert_or_update
end
Edit: adding changeset source
def changeset(struct, params \\ %{}) do
  struct
  |> cast(params, [:last_seen, :difficulty, :prev_interval, :due, :known, :learning,
    :user_id, :card_id])
  |> assoc_constraint(:user)
  |> assoc_constraint(:card)
end
Calling bury from the controller:
def update(conn, %{"currentCardId" => card_id, "command" => command}) do
  # perform some update on card
  card = Repo.get!(Card, card_id)
  user = Guardian.Plug.current_resource(conn)

  case command do
    "fail" ->
      SpacedRepetition.fail(user, card)
    "learn" ->
      SpacedRepetition.learn(user, card)
    _ ->
      SpacedRepetition.bury(user, card)
  end

  sendNextCard(conn, user)
end
Edit:
I noticed the last_seen field differs by microseconds between duplicated rows, whereas the created_at field does not have that resolution. Thus I suspect the insert_or_update call is fine, but the controller is firing twice before the DB updates. This could be something on the client side, which I don't want to think about, so I am just going to add a unique key.
As an alternative to #aliCna's answer, if you don't want to change the primary key on CardMeta, you can put a unique index constraint in the database with a migration:
defmodule YourApp.Repo.Migrations.AddCardMetaUniqueIndex do
  use Ecto.Migration

  def change do
    create unique_index(
      :card_meta,
      [:card_id, :user_id],
      name: :card_meta_unique_index)
  end
end
Which you can then handle in your changeset to produce nice errors if conflicts occur:
def changeset(struct, params \\ %{}) do
  struct
  |> cast(params, [:last_seen, :difficulty, :prev_interval, :due, :known, :learning,
    :user_id, :card_id])
  |> assoc_constraint(:user)
  |> assoc_constraint(:card)
  |> unique_constraint(:user_id, name: :card_meta_unique_index)
end
I believe you can solve this by adding a composite primary key on user_id and card_id:
defmodule Anything.CardMeta do
  use Anything.Web, :model

  @primary_key false
  schema "card_meta" do
    field :user_id, :integer, primary_key: true
    field :card_id, :integer, primary_key: true
    . . .
    timestamps()
  end
end
If this doesn't solve your problem, please add your data model here!