I took the code from the documentation, but it doesn't work.
pub fn get_countries(&self) {
    let cursor = self.countries.find(None, None);
    for doc in cursor {
        println!("{}", doc?)
    }
}
mongodb::sync::Cursor<bson::Document> doesn't implement std::fmt::Display
mongodb::sync::Cursor<bson::Document> cannot be formatted with the default formatter
the ? operator can only be applied to values that implement std::ops::Try
the ? operator cannot be applied to type mongodb::sync::Cursor<bson::Document>
Also, cursor.collect() does not work correctly:
the method collect exists for enum std::result::Result<mongodb::sync::Cursor<bson::Document>, mongodb::error::Error>, but its trait bounds were not satisfied
method cannot be called on std::result::Result<mongodb::sync::Cursor<bson::Document>, mongodb::error::Error> due to unsatisfied trait bounds
I tried cursor.iter() and cursor.into_iter(); the result was the same.
Full code of the module:
use bson::Document;
use mongodb::{
    error::Error,
    sync::{Collection, Database},
};

pub struct Core {
    db: Database,
    countries: Collection<Document>,
}

impl Core {
    pub fn new(db: &Database) -> Core {
        Core {
            db: db.clone(),
            countries: db.collection("countries"),
        }
    }

    pub fn get_country(&self, name: &String) -> Result<Option<Document>, Error> {
        self.countries.find_one(bson::doc! { "idc": name }, None)
    }

    pub fn get_countries(&self) {
        let cursor = self.countries.find(None, None);
        for doc in cursor {
            println!("{}", doc?)
        }
    }
}
It seems that doc is yielding the cursor itself, so cursor must actually be the Result<Cursor<T>> returned by the Collection::find method: https://docs.rs/mongodb/latest/mongodb/sync/struct.Collection.html#method.find
Shouldn't you unwrap (or handle with a proper match) your self.countries.find(None, None) result?
pub fn get_countries(&self) {
    let cursor = self.countries.find(None, None).unwrap();
    for doc in cursor {
        println!("{}", doc.unwrap())
    }
}
My solution
pub fn get_countries(&self) -> Vec<Document> {
    let cursor = self.countries.find(None, None).unwrap();
    let mut total: Vec<Document> = Vec::new();
    for doc in cursor {
        total.push(doc.unwrap());
    }
    total
}
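A variant that avoids the unwraps by propagating errors to the caller; a minimal sketch, relying on the sync Cursor iterating over Result<Document, Error> so that collect() can gather directly into a Result:

pub fn get_countries(&self) -> Result<Vec<Document>, Error> {
    // `?` unwraps the Result<Cursor, Error> returned by find.
    let cursor = self.countries.find(None, None)?;
    // Each iterator item is a Result<Document, Error>; collecting into
    // Result<Vec<_>, _> stops at the first error.
    cursor.collect()
}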
I'm trying to use a database connection from a Rocket on_ignite fairing:
use sqlx::{self, FromRow};
use rocket::fairing::{self, Fairing, Info, Kind};
use rocket::{Build, Rocket};
use crate::database::PostgresDb;

#[derive(FromRow)]
struct TableRow {
    column_a: String,
    column_b: String,
}

#[rocket::async_trait]
impl Fairing for TableRow {
    fn info(&self) -> Info {
        Info {
            name: "Cache table row",
            kind: Kind::Ignite,
        }
    }

    async fn on_ignite(&self, rocket: Rocket<Build>) -> fairing::Result {
        let mut db = rocket
            .state::<Connection<PostgresDb>>()
            .expect("Unable to find db connection.");
        let row = sqlx::query_as::<_, TableRow>("SELECT * FROM table WHERE id = 1;")
            .fetch_one(&mut db)
            .await
            .unwrap();
        fairing::Result::Ok(rocket.manage(row))
    }
}
The problem is that I get the following Rust error on .fetch_one(&mut db):
the trait bound `&mut rocket_db_pools::Connection<PostgresDb>: Executor<'_>` is not satisfied
the following other types implement trait `Executor<'c>`:
<&'c mut PgConnection as Executor<'c>>
<&'c mut PgListener as Executor<'c>>
<&'c mut PoolConnection<Postgres> as Executor<'c>>
<&'t mut Transaction<'c, Postgres> as Executor<'t>>
<&sqlx::Pool<DB> as Executor<'p>>
cache_rbac_on_ignite.rs(56, 14): required by a bound introduced by this call
query_as.rs(132, 17): required by a bound in `QueryAs::<'q, DB, O, A>::fetch_all`
I tried the solution suggested here: How to get the database Connection in rocket.rs Fairing, but it did not work out.
Here is the code:
use sqlx::{self, FromRow, Database};
use rocket::fairing::{self, Fairing, Info, Kind};
use rocket::{Build, Rocket};
use crate::database::PostgresDb;

#[derive(FromRow)]
struct TableRow {
    column_a: String,
    column_b: String,
}

#[rocket::async_trait]
impl Fairing for TableRow {
    fn info(&self) -> Info {
        Info {
            name: "Cache table row",
            kind: Kind::Ignite,
        }
    }

    async fn on_ignite(&self, rocket: Rocket<Build>) -> fairing::Result {
        let mut db = PostgresDb::get_one(rocket).await.unwrap();
        let row = sqlx::query_as::<_, TableRow>("SELECT * FROM table WHERE id = 1;")
            .fetch_one(&mut db)
            .await
            .unwrap();
        fairing::Result::Ok(rocket.manage(row))
    }
}
I get the following Rust error on the let mut db = PostgresDb::get_one(rocket).await.unwrap(); line:
no function or associated item named `get_one` found for struct `PostgresDb` in the current scope
function or associated item not found in `PostgresDb`
mod.rs(8, 1): function or associated item `get_one` not found for this struct
What is the right way to use a database connection inside a fairing? Thank you!
Finally found an answer. Here is what worked for me:
use rocket::fairing::{self, Fairing, Info, Kind};
use rocket::{Build, Rocket};
use rocket_db_pools::{
    sqlx::{self, FromRow},
    Database,
};
use crate::database::PostgresDb;

#[derive(FromRow)]
struct TableRow {
    column_a: String,
    column_b: String,
}

#[rocket::async_trait]
impl Fairing for TableRow {
    fn info(&self) -> Info {
        Info {
            name: "Cache table row",
            kind: Kind::Ignite,
        }
    }

    async fn on_ignite(&self, rocket: Rocket<Build>) -> fairing::Result {
        let db = PostgresDb::fetch(&rocket).unwrap();
        let mut conn = db.acquire().await.unwrap();
        let row = sqlx::query_as::<_, TableRow>("SELECT * FROM table WHERE id = 1;")
            .fetch_one(&mut conn)
            .await
            .unwrap();
        fairing::Result::Ok(rocket.manage(row))
    }
}
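Note that Database::fetch only succeeds if the pool has already been initialized, so the pool fairing must be attached before this one. A minimal sketch of the launch setup, assuming PostgresDb derives rocket_db_pools::Database (the TableRow field values here are placeholders):

#[launch]
fn rocket() -> _ {
    rocket::build()
        // Initialize the pool first so that PostgresDb::fetch can find it.
        .attach(PostgresDb::init())
        .attach(TableRow {
            column_a: String::new(),
            column_b: String::new(),
        })
}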
I am using Diesel (diesel = { version = "1.4.8", features = ["postgres", "64-column-tables", "chrono"] }) to do pagination with Rust 1.59.0. This is the key part of the pagination query in Diesel:
use diesel::pg::Pg;
use diesel::query_builder::{AstPass, QueryFragment};
use diesel::QueryResult;
use diesel::sql_types::BigInt;
use crate::common::query::pagination::Paginated;

pub fn handle_table_query<T: QueryFragment<Pg>>(this: &Paginated<T>, mut out: AstPass<Pg>) -> QueryResult<()> {
    out.push_sql("SELECT *, COUNT(*) OVER () FROM ");
    if this.is_sub_query {
        out.push_sql("(");
    }
    this.query.walk_ast(out.reborrow())?;
    if this.is_sub_query {
        out.push_sql(")");
    }
    out.push_sql(" t LIMIT ");
    out.push_bind_param::<BigInt, _>(&this.per_page)?;
    out.push_sql(" OFFSET ");
    let offset = (this.page - 1) * this.per_page;
    out.push_bind_param::<BigInt, _>(&offset)?;
    Ok(())
}
This code generates SQL like this:
select
    *,
    COUNT(*) over ()
from
    (
    select
        "article"."id",
        "article"."user_id",
        "article"."title",
        "article"."author",
        "article"."guid",
        "article"."created_time",
        "article"."updated_time",
        "article"."link",
        "article"."pub_time",
        "article"."sub_source_id",
        "article"."cover_image",
        "article"."channel_reputation",
        "article"."editor_pick"
    from
        "article"
    where
        "article"."id" > $1) t
limit $2 offset $3
As you know, this SQL has a big problem: as the article table grows, the subquery causes a sequential scan. The article table now contains 2,000,000 rows and each query takes more than 20 seconds. What I am trying to do is remove the window function and move the limit condition into the subquery, so the final SQL looks like this:
select
    *,
    count_estimate('select * from article')
from
    (
    select
        "article"."id",
        "article"."user_id",
        "article"."title",
        "article"."author",
        "article"."guid",
        "article"."created_time",
        "article"."updated_time",
        "article"."link",
        "article"."pub_time",
        "article"."sub_source_id",
        "article"."cover_image",
        "article"."channel_reputation",
        "article"."editor_pick"
    from
        "article"
    where
        "article"."id" > $1 limit $2 offset $3 ) t
This SQL takes less than 100ms. This is the Rust code I am tweaking:
pub fn handle_big_table_query<T: QueryFragment<Pg>>(this: &Paginated<T>, mut out: AstPass<Pg>) -> QueryResult<()> {
    out.push_sql("SELECT *, count_estimate('select * from article') FROM ");
    if this.is_sub_query {
        out.push_sql("(");
    }
    this.query.walk_ast(out.reborrow())?;
    if this.is_sub_query {
        out.push_sql(" t LIMIT ");
        out.push_bind_param::<BigInt, _>(&this.per_page)?;
        out.push_sql(" OFFSET ");
        let offset = (this.page - 1) * this.per_page;
        out.push_bind_param::<BigInt, _>(&offset)?;
        out.push_sql(")");
    }
    Ok(())
}
To my surprise, the SQL generated by this new code did not return any content. Is it possible to see the SQL? I checked my Rust source code but could not figure out where it is going wrong. This is the full pagination code:
use diesel::prelude::*;
use diesel::query_dsl::methods::LoadQuery;
use diesel::query_builder::{QueryFragment, Query, AstPass};
use diesel::pg::Pg;
use diesel::sql_types::BigInt;
use diesel::QueryId;
use serde::{Serialize, Deserialize};
use crate::common::query::page_query_handler::{handle_big_table_query, handle_table_query};

pub trait PaginateForQueryFragment: Sized {
    fn paginate(self, page: i64, is_big_table: bool) -> Paginated<Self>;
}

impl<T> PaginateForQueryFragment for T
where
    T: QueryFragment<Pg>,
{
    fn paginate(self, page: i64, is_big_table: bool) -> Paginated<Self> {
        Paginated {
            query: self,
            per_page: 10,
            page,
            is_sub_query: true,
            is_big_table,
        }
    }
}

#[derive(Debug, Clone, Copy, QueryId, Serialize, Deserialize, Default)]
pub struct Paginated<T> {
    pub query: T,
    pub page: i64,
    pub per_page: i64,
    pub is_sub_query: bool,
    pub is_big_table: bool,
}

impl<T> Paginated<T> {
    pub fn per_page(self, per_page: i64) -> Self {
        Paginated { per_page, ..self }
    }

    pub fn load_and_count_pages<U>(self, conn: &PgConnection) -> QueryResult<(Vec<U>, i64)>
    where
        Self: LoadQuery<PgConnection, (U, i64)>,
    {
        let per_page = self.per_page;
        let results = self.load::<(U, i64)>(conn)?;
        let total = results.get(0).map(|x| x.1).unwrap_or(0);
        let records = results.into_iter().map(|x| x.0).collect();
        let total_pages = (total as f64 / per_page as f64).ceil() as i64;
        Ok((records, total_pages))
    }

    pub fn load_and_count_pages_total<U>(self, conn: &PgConnection) -> QueryResult<(Vec<U>, i64, i64)>
    where
        Self: LoadQuery<PgConnection, (U, i64)>,
    {
        let per_page = self.per_page;
        let results = self.load::<(U, i64)>(conn)?;
        let total = results.get(0).map(|x| x.1).unwrap_or(0);
        let records = results.into_iter().map(|x| x.0).collect();
        let total_pages = (total as f64 / per_page as f64).ceil() as i64;
        Ok((records, total_pages, total))
    }
}

impl<T: Query> Query for Paginated<T> {
    type SqlType = (T::SqlType, BigInt);
}

impl<T> RunQueryDsl<PgConnection> for Paginated<T> {}

impl<T> QueryFragment<Pg> for Paginated<T>
where
    T: QueryFragment<Pg>,
{
    fn walk_ast(&self, out: AstPass<Pg>) -> QueryResult<()> {
        // Propagate the handlers' results instead of silently discarding them.
        if self.is_big_table {
            handle_big_table_query(self, out)
        } else {
            handle_table_query(self, out)
        }
    }
}

#[derive(Debug, Clone, Copy, QueryId)]
pub struct QuerySourceToQueryFragment<T> {
    query_source: T,
}

impl<FC, T> QueryFragment<Pg> for QuerySourceToQueryFragment<T>
where
    FC: QueryFragment<Pg>,
    T: QuerySource<FromClause = FC>,
{
    fn walk_ast(&self, mut out: AstPass<Pg>) -> QueryResult<()> {
        self.query_source.from_clause().walk_ast(out.reborrow())?;
        Ok(())
    }
}

pub trait PaginateForQuerySource: Sized {
    fn paginate(self, page: i64, is_big_table: bool) -> Paginated<QuerySourceToQueryFragment<Self>>;
}

impl<T> PaginateForQuerySource for T
where
    T: QuerySource,
{
    fn paginate(self, page: i64, is_big_table: bool) -> Paginated<QuerySourceToQueryFragment<Self>> {
        Paginated {
            query: QuerySourceToQueryFragment { query_source: self },
            per_page: 10,
            page,
            is_sub_query: false,
            is_big_table,
        }
    }
}
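For context, a hypothetical call site for these helpers; the article schema module and Article struct are assumptions, not part of the original code:

// `article::table.filter(...)` is a QueryFragment, so it picks up
// `paginate` from PaginateForQueryFragment.
let (articles, total_pages) = article::table
    .filter(article::id.gt(last_seen_id))
    .paginate(page, true) // is_big_table = true
    .per_page(10)
    .load_and_count_pages::<Article>(&conn)?;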
I finally found that this code block was generating the SQL command in the wrong format; tweaking the code like this fixes it:
pub fn handle_big_table_query<T: QueryFragment<Pg>>(this: &Paginated<T>, mut out: AstPass<Pg>) -> QueryResult<()> {
    out.push_sql("SELECT *, count_estimate('select * from article') FROM ");
    if this.is_sub_query {
        out.push_sql("(");
    }
    this.query.walk_ast(out.reborrow())?;
    if this.is_sub_query {
        out.push_sql(" LIMIT ");
        out.push_bind_param::<BigInt, _>(&this.per_page)?;
        out.push_sql(" OFFSET ");
        let offset = (this.page - 1) * this.per_page;
        out.push_bind_param::<BigInt, _>(&offset)?;
        out.push_sql(") t");
    }
    Ok(())
}
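On the embedded question ("is it possible to see the sql?"): diesel::debug_query renders a query's SQL and bind parameters without executing it, which makes this kind of misplaced-paren bug visible immediately. A sketch, again assuming the article schema module:

use diesel::debug_query;
use diesel::pg::Pg;

let query = article::table.filter(article::id.gt(0)).paginate(1, true);
// Prints something like:
// SELECT *, count_estimate('select * from article') FROM (... LIMIT $1 OFFSET $2) t
// -- binds: [10, 0]
println!("{}", debug_query::<Pg, _>(&query));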
I am making a GraphQL resolver in Rust, and am only fetching the fields requested in the GraphQL query from my MongoDB database. However, Rust complains that the fetched data, of course, no longer matches the specified return type. What is the right way to do something like this?
I guess I could do #[serde(default)], but that doesn't work exactly as expected (I will explain later).
use std::cmp::min;

use async_graphql::*;
use futures::stream::TryStreamExt;
use mongodb::{bson::doc, bson::oid::ObjectId, options::FindOptions, Collection};
use serde::{Deserialize, Serialize};

#[derive(SimpleObject, Serialize, Deserialize, Debug)]
#[graphql(complex)]
struct Post {
    #[serde(rename = "_id")]
    pub id: ObjectId,
    pub title: String,
    // I could do something like
    // #[serde(default)]
    pub body: String,
}

#[ComplexObject]
impl Post {
    async fn text_snippet(&self) -> &str {
        let length = self.body.len();
        let end = min(length, 5);
        &self.body[0..end]
    }
}

struct Query;

#[Object]
impl Query {
    // fetching posts
    async fn posts<'ctx>(&self, ctx: &Context<'ctx>) -> Vec<Post> {
        let posts = ctx.data_unchecked::<Collection<Post>>();
        // the projection doc is built from the GraphQL field set; say it is:
        let projection = doc! { "title": 1 };
        let options = FindOptions::builder().limit(10).projection(projection).build();
        let cursor = posts.find(None, options).await.unwrap();
        cursor.try_collect().await.unwrap_or_else(|_| vec![])
    }
}
But when I run the query
{
    posts {
        id
        title
        textSnippet
    }
}
I get:
thread 'actix-rt:worker:0' panicked at 'called `Result::unwrap()` on an `Err` value: Error { kind: BsonDecode(DeserializationError { message: "missing field `body`" }), labels: [] }', server/src/schema/post.rs:20:46
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
And when I put #[serde(default)] on body and then query textSnippet but not body, textSnippet comes back as an empty string.
How do I fix this?
Could you wrap every field in Post with an Option and let the try_collect fill the returned fields for you?
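A sketch of that suggestion: make every projectable field an Option so a projected-out field deserializes as None instead of failing, and adapt text_snippet to the now-optional body:

#[derive(SimpleObject, Serialize, Deserialize, Debug)]
#[graphql(complex)]
struct Post {
    #[serde(rename = "_id")]
    pub id: ObjectId,
    pub title: Option<String>,
    // Absent from the BSON document when projected out; becomes None.
    pub body: Option<String>,
}

#[ComplexObject]
impl Post {
    async fn text_snippet(&self) -> Option<&str> {
        let body = self.body.as_deref()?;
        let end = min(body.len(), 5);
        Some(&body[0..end])
    }
}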
You can create a struct with just the fields you need and use a collection of the new struct.
use async_graphql::*;
use futures::stream::TryStreamExt;
use mongodb::{bson::doc, bson::oid::ObjectId, options::FindOptions, Collection};
use serde::{Deserialize, Serialize};

#[derive(SimpleObject, Serialize, Deserialize, Debug)]
#[graphql(complex)]
struct Post {
    #[serde(rename = "_id")]
    pub id: ObjectId,
    pub title: String,
    // I could do something like
    // #[serde(default)]
    pub body: String,
}

#[derive(SimpleObject, Serialize, Deserialize, Debug)]
struct PostTitle {
    #[serde(rename = "_id")]
    pub id: ObjectId,
    pub title: String,
}

struct Query;

#[Object]
impl Query {
    // fetching posts
    async fn posts<'ctx>(&self, ctx: &Context<'ctx>) -> Vec<PostTitle> {
        let posts = ctx.data_unchecked::<Collection<PostTitle>>();
        let projection = doc! { "title": 1 };
        let options = FindOptions::builder().limit(10).projection(projection).build();
        let cursor = posts.find(None, options).await.unwrap();
        cursor.try_collect().await.unwrap_or_else(|_| vec![])
    }
}
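For this to work, a typed handle for each struct has to be registered in the schema data; a minimal sketch, assuming the underlying collection is named "posts":

// Both handles point at the same collection, just with different
// deserialization targets.
let schema = Schema::build(Query, EmptyMutation, EmptySubscription)
    .data(db.collection::<Post>("posts"))
    .data(db.collection::<PostTitle>("posts"))
    .finish();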
Trying to store incoming data into Mongo using r2d2-mongodb and actix-web.
#[derive(Serialize, Deserialize, Debug)]
struct Weight {
    desc: String,
    grams: u32,
}

fn store_weights(weights: web::Json<Vec<Weight>>, db: web::Data<Pool<MongodbConnectionManager>>) -> Result<String> {
    let conn = db.get().unwrap();
    let coll = conn.collection("weights");
    for weight in weights.iter() {
        coll.insert_one(weight.into(), None).unwrap();
    }
    Ok(String::from("ok"))
}
I can't seem to figure out what I need to convert weight into to use it with insert_one.
The above code errors with error[E0277]: the trait bound 'bson::ordered::OrderedDocument: std::convert::From<&api::weight::Weight>' is not satisfied
The signature for insert_one is:
pub fn insert_one(
    &self,
    doc: Document,
    options: impl Into<Option<InsertOneOptions>>
) -> Result<InsertOneResult>
Document is bson::Document, an alias for bson::ordered::OrderedDocument.
Your type Weight does not implement the trait Into<Document>, which is required for weight.into(). You could implement it, but a more idiomatic way would be to use the Serialize trait with bson::to_bson:
fn store_weights(
    weights: Vec<Weight>,
    db: web::Data<Pool<MongodbConnectionManager>>,
) -> Result<&'static str, Box<dyn std::error::Error>> {
    let conn = db.get()?;
    let coll = conn.collection("weights");
    for weight in weights.iter() {
        let document = match bson::to_bson(weight)? {
            bson::Bson::Document(doc) => doc,
            _ => unreachable!(), // Weight should always serialize to a document
        };
        coll.insert_one(document, None)?;
    }
    Ok("ok")
}
Notes:
to_bson returns an enum, Bson, which can be Array, Boolean, Document, etc. We use match to make sure it is Document.
I've used ? instead of unwrap, to make use of the Result return type. Make sure the errors are Into<Error> for your Result type.
Returning &'static str instead of allocating a new String for each request.
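As a follow-up sketch, the per-document round trips can be batched with insert_many; whether that method exists with this exact signature depends on the driver version in use, so treat it as an assumption:

// Serialize all weights first, then insert them in one call.
let documents = weights
    .iter()
    .map(|w| match bson::to_bson(w)? {
        bson::Bson::Document(doc) => Ok(doc),
        _ => unreachable!(), // Weight always serializes to a document
    })
    .collect::<Result<Vec<_>, _>>()?;
coll.insert_many(documents, None)?;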
I'd like to create a setter/getter pair of functions whose names are automatically generated from a shared component, but I couldn't find any example of macro_rules! generating a new name.
Is there a way to generate code like fn get_$iden() and SomeEnum::XX_GET_$enum_iden?
If you are using Rust >= 1.31.0, I would recommend my paste crate, which provides a stable way to create concatenated identifiers in a macro.
macro_rules! make_a_struct_and_getters {
    ($name:ident { $($field:ident),* }) => {
        // Define the struct. This expands to:
        //
        //     pub struct S {
        //         a: String,
        //         b: String,
        //         c: String,
        //     }
        pub struct $name {
            $(
                $field: String,
            )*
        }

        paste::item! {
            // An impl block with getters. Stuff in [<...>] is concatenated
            // together as one identifier. This expands to:
            //
            //     impl S {
            //         pub fn get_a(&self) -> &str { &self.a }
            //         pub fn get_b(&self) -> &str { &self.b }
            //         pub fn get_c(&self) -> &str { &self.c }
            //     }
            impl $name {
                $(
                    pub fn [<get_ $field>](&self) -> &str {
                        &self.$field
                    }
                )*
            }
        }
    };
}

make_a_struct_and_getters!(S { a, b, c });
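A quick check of the generated API (the field values are arbitrary):

fn main() {
    let s = S {
        a: "one".to_owned(),
        b: "two".to_owned(),
        c: "three".to_owned(),
    };
    // The macro generated get_a/get_b/get_c for us.
    assert_eq!(s.get_a(), "one");
    assert_eq!(s.get_c(), "three");
}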
My mashup crate provides a stable way to create new identifiers that works with any Rust version >= 1.15.0.
#[macro_use]
extern crate mashup;

macro_rules! make_a_struct_and_getters {
    ($name:ident { $($field:ident),* }) => {
        // Define the struct. This expands to:
        //
        //     pub struct S {
        //         a: String,
        //         b: String,
        //         c: String,
        //     }
        pub struct $name {
            $(
                $field: String,
            )*
        }

        // Use mashup to define a substitution macro `m!` that replaces every
        // occurrence of the tokens `"get" $field` in its input with the
        // concatenated identifier `get_ $field`.
        mashup! {
            $(
                m["get" $field] = get_ $field;
            )*
        }

        // Invoke the substitution macro to build an impl block with getters.
        // This expands to:
        //
        //     impl S {
        //         pub fn get_a(&self) -> &str { &self.a }
        //         pub fn get_b(&self) -> &str { &self.b }
        //         pub fn get_c(&self) -> &str { &self.c }
        //     }
        m! {
            impl $name {
                $(
                    pub fn "get" $field(&self) -> &str {
                        &self.$field
                    }
                )*
            }
        }
    }
}

make_a_struct_and_getters!(S { a, b, c });
No, not as of Rust 1.22.
If you can use nightly builds...
Yes: concat_idents!(get_, $iden) and such will allow you to create a new identifier.
But no: the parser doesn't allow macro calls everywhere, so many of the places you might have sought to do this won't work. In such cases, you are sadly on your own. fn concat_idents!(get_, $iden)(…) { … }, for example, won't work.
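For illustration, a minimal nightly sketch of the one place concat_idents! does work: expression position, referring to an identifier that already exists.

#![feature(concat_idents)]

fn get_value() -> u32 {
    42
}

fn main() {
    // Allowed: expression position, and `get_value` already exists.
    let v = concat_idents!(get_, value)();
    assert_eq!(v, 42);
    // Not allowed: `fn concat_idents!(get_, value)() { ... }` -- the parser
    // rejects a macro call where an item name is expected.
}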
There's a little-known crate, gensym, that can generate unique UUID-based names and pass them as the first argument to a macro, followed by a comma:
macro_rules! gen_fn {
    ($a:ty, $b:ty) => {
        gensym::gensym! { _gen_fn!{ $a, $b } }
    };
}

macro_rules! _gen_fn {
    ($gensym:ident, $a:ty, $b:ty) => {
        fn $gensym(a: $a, b: $b) {
            unimplemented!()
        }
    };
}

mod test {
    gen_fn! { u64, u64 }
    gen_fn! { u64, u64 }
}
If all you need is a unique name, and you don't care what it is, that can be useful. I used it to solve a problem where each invocation of a macro needed to create a unique static to hold a singleton struct. I couldn't use paste, since I didn't have unique identifiers I could paste together in the first place.