Error inserting multiple columns into PostgreSQL DB from sqlx

I'm trying to perform an insert query into my Postgresql DB, but I am getting a mismatched type issue that I'm unsure how to solve.
Here's the code:
pub struct Product {
    pub id: i32,
    pub product_id: i64,
    pub title: String,
    pub handle: String,
    pub tags: Json<Vec<String>>,
    pub product_type: String,
    pub image_url: String,
    pub created_at: String,
    pub updated_at: String,
}
pub struct ProductPatch {
    pub product_id: i64,
    pub title: String,
    pub handle: String,
    pub tags: Vec<String>,
    pub product_type: String,
    pub image_url: String,
}
async fn add_product(pool: &Db, product: &ProductPatch) -> Result<i64, sqlx::Error> {
    let rec = sqlx::query!(
        r#"
        INSERT INTO products (product_id, title, handle, tags, product_type, image_url)
        VALUES ($1, $2, $3, $4, $5, $6)
        RETURNING product_id, title, handle, tags, product_type, image_url
        "#,
        &product.product_id,
        &product.title,
        &product.handle,
        &product.tags,
        &product.product_type,
        &product.image_url
    )
    .fetch_one(pool)
    .await?;

    Ok(rec.product_id)
}
Here's the error:
mismatched types
expected enum Result<_, sqlx::Error>
found enum Result<Record, anyhow::Error>

I suppose you've imported anyhow's Result, so your return type Result<i64, sqlx::Error> is actually interpreted as anyhow::Result<i64, sqlx::Error>.
If you intend to return sqlx's Result, either fully qualify the return type (sqlx::Result<i64> or std::result::Result<i64, sqlx::Error>) or change your import statements.
If you intend to return anyhow's Result, just return anyhow::Result<i64>.
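As a minimal sketch of the first option (assuming the same Db pool alias and ProductPatch from the question; the RETURNING list is trimmed to the one column actually used):
use anyhow::Result; // this import shadows the plain `Result` for the whole module

// Option 1: sidestep the shadowed alias by naming sqlx's own alias explicitly.
async fn add_product(pool: &Db, product: &ProductPatch) -> sqlx::Result<i64> {
    let rec = sqlx::query!(
        r#"
        INSERT INTO products (product_id, title, handle, tags, product_type, image_url)
        VALUES ($1, $2, $3, $4, $5, $6)
        RETURNING product_id
        "#,
        &product.product_id,
        &product.title,
        &product.handle,
        &product.tags,
        &product.product_type,
        &product.image_url
    )
    .fetch_one(pool)
    .await?;
    Ok(rec.product_id)
}

// Option 2 (alternative): lean into anyhow instead; `?` converts sqlx::Error.
// async fn add_product(pool: &Db, product: &ProductPatch) -> anyhow::Result<i64> { ... }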

Related

Problems working with Rust and Postgres data types

I'm trying to build a REST API with Rust and Postgres, but I can't make it work because of the interaction between the two.
The actual problem is that I have a jsonb column in Postgres, and when I return the data and try to store it in a struct I always get an error. The same problem occurs when I try to save data.
These are the models. (The Option is only because I'm testing things; it should return a value.)
#[derive(Debug, Serialize, Deserialize)]
pub struct CategoryView {
    pub id: i32,
    pub category_name: String,
    pub category_custom_fields: Option<serde_json::Value>,
}

#[derive(Debug, Serialize, Deserialize)]
pub struct CategoryPayload {
    pub category_name: String,
    pub category_custom_fields: Option<serde_json::Value>,
}
These are the Postgres queries:
fn find_all(conn: &mut DbPooled) -> Result<Vec<CategoryView>, DbError> {
    let mut query =
        "SELECT id, category_name, category_custom_fields FROM accounting.categories".to_owned();
    query.push_str(" WHERE user_id = $1");
    query.push_str(" AND is_deleted = false");

    let items = conn.query(&query, &[&unsafe { CURRENT_USER.to_owned() }])?;

    let items_view: Vec<CategoryView> = items
        .iter()
        .map(|h| CategoryView {
            id: h.get("id"),
            category_name: h.get("category_name"),
            category_custom_fields: h.get("category_custom_fields"),
        })
        .collect();

    Ok(items_view)
}
fn add(payload: &CategoryPayload, conn: &mut DbPooled) -> Result<CategoryView, DbError> {
    let mut query =
        "INSERT INTO accounting.categories (user_id, category_name, category_custom_fields, create_date, update_date)"
            .to_owned();
    query.push_str(" VALUES ($1, $2, $3, now(), now())");
    query.push_str(" RETURNING id");

    let item_id = conn
        .query_one(
            &query,
            &[
                &unsafe { CURRENT_USER.to_owned() },
                &payload.category_name,
                &payload.category_custom_fields,
            ],
        )?
        .get(0);

    let inserted_item = CategoryView {
        id: item_id,
        category_name: payload.category_name.to_string(),
        // clone() is needed because `payload` is only borrowed here
        category_custom_fields: payload.category_custom_fields.clone(),
    };

    Ok(inserted_item)
}
The same happens with update, but I think the solution will be the same as for the add function.
The error is:
the trait bound `serde_json::Value: ToSql` is not satisfied
the following other types implement trait `ToSql`:
&'a T
&'a [T]
&'a [u8]
&'a str
Box<[T]>
Box<str>
Cow<'a, str>
HashMap<std::string::String, std::option::Option<std::string::String>, H>
and 17 others
required for `std::option::Option<serde_json::Value>` to implement `ToSql`
required for the cast from `std::option::Option<serde_json::Value>` to the object type `dyn ToSql + Sync`
From what I've read, serde_json::Value is the equivalent of jsonb, so I don't understand the error.
I had a similar problem previously when trying to work with a decimal value in Postgres; I had to change it to an integer and store the value multiplied in the database. It is a money column, so if you can help me with that too I will change it back.
I was hoping someone could explain how to fix this and why it happens, so that I can avoid having to ask for help with the data types in the future.
The problem was in the dependencies.
It turns out some dependencies have features that add additional functionality.
I had installed the dependency without any features, and once I added the features it started to work without issues.
Only had to change from:
[dependencies]
postgres = "0.19.4"
to:
[dependencies]
postgres = { version = "0.19.4", features = ["with-chrono-0_4", "with-serde_json-1"] }
chrono is for dates and serde_json for jsonb.
I'll check the decimal problem, but I think it will be a similar solution.
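With the with-serde_json-1 feature enabled, serde_json::Value implements ToSql/FromSql, so the queries above compile unchanged. A minimal round-trip sketch (the connection string is a placeholder and the insert is simplified relative to the real table):
use postgres::{Client, NoTls};
use serde_json::json;

fn main() -> Result<(), postgres::Error> {
    let mut client = Client::connect("host=localhost user=postgres", NoTls)?;

    // serde_json::Value now implements ToSql, so it binds to a jsonb parameter.
    let custom_fields = json!({ "color": "red" });
    client.execute(
        "INSERT INTO accounting.categories (category_name, category_custom_fields) VALUES ($1, $2)",
        &[&"demo", &custom_fields],
    )?;

    // ...and FromSql, so a jsonb column reads back into a Value.
    let row = client.query_one(
        "SELECT category_custom_fields FROM accounting.categories WHERE category_name = $1",
        &[&"demo"],
    )?;
    let fields: Option<serde_json::Value> = row.get(0);
    println!("{:?}", fields);
    Ok(())
}
For the money column, the rust_decimal crate is worth a look: its db-postgres feature is supposed to provide ToSql/FromSql for Decimal against NUMERIC columns in the same way, though I haven't verified it against this exact setup.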

How to write a good select sub query in SeaORM?

select wallets.*, users.name as name,
(select max(regist_at) from payments
where payment_item_id = 2 and receiver_id = 1) as newest_login_at
from wallets
inner join users on wallets.user_id = users.user_id
where wallets.user_id = 1;
I would like to execute the above SQL statement in sea-orm, but I don't know how to do it.
If possible, I would like to write the following natural code, but it is a compile error.
Does anyone know a better way?
rustc 1.65.0-nightly | sea-orm "0.9.2"
#[derive(Debug, FromQueryResult, Serialize, Deserialize)]
pub struct WalletSummary {
    pub user_id: i64,
    pub name: String,
    pub amount: i64,
    pub regist_at: DateTimeWithTimeZone,
    pub newest_login_at: Option<DateTimeWithTimeZone>,
}

let user_id: i64 = 2;
let wallet_summary = Wallets::find_by_id(user_id)
    .column(users::Column::Name)
    .column_as(
        Payments::find()
            .column(payments::Column::RegistAt.max())
            .filter(payments::Column::PaymentItemId.eq(1))
            .filter(payments::Column::ReceiverId.eq(user_id)),
        "newest_login_at",
    )
    .join(JoinType::InnerJoin, wallets::Relation::Users.def())
    .into_model::<WalletSummary>()
    .one(db)
    .await?;
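One workaround, sketched on the assumption that dropping to raw SQL for the scalar subquery is acceptable: sea-orm re-exports sea-query, whose Expr::cust turns a SQL fragment into an expression that column_as accepts. This is a sketch against sea-orm 0.9, not a verified solution:
use sea_orm::sea_query::Expr;

// Sketch: inline the subquery as a custom SQL expression instead of
// composing it from the Payments entity. Wallets, wallets, users, and
// WalletSummary are the items from the question.
let wallet_summary = Wallets::find_by_id(user_id)
    .column(users::Column::Name)
    .column_as(
        Expr::cust("(SELECT max(regist_at) FROM payments WHERE payment_item_id = 2 AND receiver_id = 1)"),
        "newest_login_at",
    )
    .join(JoinType::InnerJoin, wallets::Relation::Users.def())
    .into_model::<WalletSummary>()
    .one(db)
    .await?;
The trade-off is that the subquery is an opaque string to sea-orm, so it is not checked at compile time and any parameters must be embedded or bound by hand.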

Typing of return type after mongodb projection

I am writing a GraphQL resolver in Rust, and I only fetch the fields from the GraphQL query in my MongoDB database. However, Rust complains that the fetched data, of course, no longer matches the specified return type. What is the right way to do something like this?
I guess I could use #[serde(default)], but that doesn't work exactly as expected (I will explain later).
use std::cmp::min;

use async_graphql::*;
use futures::TryStreamExt; // for `try_collect`
use mongodb::{bson::doc, bson::oid::ObjectId, options::FindOptions, Collection};
use serde::{Deserialize, Serialize};

#[derive(SimpleObject, Serialize, Deserialize, Debug)]
#[graphql(complex)]
struct Post {
    #[serde(rename = "_id")]
    pub id: ObjectId,
    pub title: String,
    // I could do something like
    // #[serde(default)]
    pub body: String,
}

#[ComplexObject]
impl Post {
    async fn text_snippet(&self) -> &str {
        let length = self.body.len();
        let end = min(length, 5);
        &self.body[0..end]
    }
}

struct Query;

#[Object]
impl Query {
    // fetching posts
    async fn posts<'ctx>(&self, ctx: &Context<'ctx>) -> Vec<Post> {
        let posts = ctx.data_unchecked::<Collection<Post>>();
        // the projection doc is built from the requested GraphQL fields, let's say:
        let projection = doc! { "title": 1 };
        let options = FindOptions::builder().limit(10).projection(projection).build();
        let cursor = posts.find(None, options).await.unwrap();
        cursor.try_collect().await.unwrap_or_else(|_| vec![])
    }
}
But when I run the query
{
  posts {
    id
    title
    textSnippet
  }
}
I get
thread 'actix-rt:worker:0' panicked at 'called `Result::unwrap()` on an `Err` value: Error { kind: BsonDecode(DeserializationError { message: "missing field `body`" }), labels: [] }', server/src/schema/post.rs:20:46
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
And when I put #[serde(default)] on body and then query textSnippet but not body, the textSnippet is an empty string.
How do I fix this?
Could you wrap every field in Post with an Option and let the try_collect fill the returned fields for you?
You can create a struct with just the fields you need and use a collection of the new struct.
use async_graphql::*;
use futures::TryStreamExt; // for `try_collect`
use mongodb::{bson::doc, bson::oid::ObjectId, options::FindOptions, Collection};
use serde::{Deserialize, Serialize};

#[derive(SimpleObject, Serialize, Deserialize, Debug)]
#[graphql(complex)]
struct Post {
    #[serde(rename = "_id")]
    pub id: ObjectId,
    pub title: String,
    pub body: String,
}

// A trimmed-down view containing only the projected fields;
// it has no complex fields, so no #[graphql(complex)] is needed.
#[derive(SimpleObject, Serialize, Deserialize, Debug)]
struct PostTitle {
    #[serde(rename = "_id")]
    pub id: ObjectId,
    pub title: String,
}

struct Query;

#[Object]
impl Query {
    // fetching posts
    async fn posts<'ctx>(&self, ctx: &Context<'ctx>) -> Vec<PostTitle> {
        let posts = ctx.data_unchecked::<Collection<PostTitle>>();
        let projection = doc! { "title": 1 };
        let options = FindOptions::builder().limit(10).projection(projection).build();
        let cursor = posts.find(None, options).await.unwrap();
        cursor.try_collect().await.unwrap_or_else(|_| vec![])
    }
}
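The Option-wrapping idea from the comment above is also workable. As a sketch: every field a projection may drop becomes optional, so deserialization succeeds no matter which fields were fetched, at the cost of unwrapping Options everywhere downstream.
// Sketch of the comment's suggestion: a missing `body` deserializes
// as None instead of failing with "missing field `body`".
#[derive(SimpleObject, Serialize, Deserialize, Debug)]
struct Post {
    #[serde(rename = "_id")]
    pub id: ObjectId,
    pub title: Option<String>,
    pub body: Option<String>,
}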

DateTime<Utc> compiles but not DateTime<Local> when querying a table with a column defined as timestamp with time zone

I have a postgresql-table with a column defined as timestamp with time zone. The table is mapped to this struct:
#[derive(Serialize, Queryable)]
pub struct Location {
    pub publication_time: DateTime<Utc>,
    pub id: i32,
    pub name: String,
    pub latitude: BigDecimal,
    pub longitude: BigDecimal,
}
The schema has this definition:
table! {
    locations {
        publication_time -> Timestamptz,
        id -> Integer,
        name -> Text,
        latitude -> Numeric,
        longitude -> Numeric,
    }
}
(partial) Cargo.toml:
serde = "1.0.125"
serde_json = "1.0.64"
serde_derive = "1.0.125"
diesel = { version = "1.4.6", features = ["postgres", "r2d2", "chrono", "numeric"] }
bigdecimal = { version = "0.1.0", features = ["serde"] }
chrono = { version = "0.4.19", features = ["serde"] }
The function that queries the database:
fn get_all_locations(pool: web::Data<Pool>) -> Result<Vec<Location>, diesel::result::Error> {
    let conn = pool.get().unwrap();
    let items = locations.load::<Location>(&conn)?;
    Ok(items)
}
This is then serialized to a JSON array using serde_json. The timestamp in the database is 2021-04-08 15:02:02.514+02. With DateTime<Utc> the program compiles fine, but the timestamp is shown in UTC, like 2021-04-08T13:02:02.514Z. I changed publication_time to DateTime<Local> to retain the time zone information, but then cargo build fails with:
error[E0277]: the trait bound `DateTime<Local>: FromSql<diesel::sql_types::Timestamptz, Pg>` is not satisfied
--> src/controller.rs:21:27
|
21 | let items = locations.load::<Location>(&conn)?;
| ^^^^ the trait `FromSql<diesel::sql_types::Timestamptz, Pg>` is not implemented for `DateTime<Local>`
|
= help: the following implementations were found:
<DateTime<Utc> as FromSql<diesel::sql_types::Timestamptz, Pg>>
= note: required because of the requirements on the impl of `diesel::Queryable<diesel::sql_types::Timestamptz, Pg>` for `DateTime<Local>`
= note: 2 redundant requirements hidden
= note: required because of the requirements on the impl of `diesel::Queryable<(diesel::sql_types::Timestamptz, diesel::sql_types::Integer, diesel::sql_types::Text, diesel::sql_types::Numeric, diesel::sql_types::Numeric), Pg>` for `models::Location`
= note: required because of the requirements on the impl of `LoadQuery<_, models::Location>` for `locations::table`
I have another program that inserts into this table, and that works; the only difference is derive(Deserialize, Insertable).
#[derive(Deserialize, Insertable)]
pub struct Location {
    pub publication_time: DateTime<Local>,
    pub id: i32,
    pub name: String,
    pub latitude: BigDecimal,
    pub longitude: BigDecimal,
}
Mapping a Timestamptz field to a DateTime<Local> is not supported by diesel itself, as it only provides the corresponding impl for DateTime<Utc>.
You can work around this by using the #[diesel(deserialize_as = "…")] attribute on the corresponding field and providing your own deserialization wrapper:
use chrono::{DateTime, Local, Utc};
use diesel::backend::Backend;
use diesel::Queryable;

#[derive(Serialize, Queryable)]
pub struct Location {
    #[diesel(deserialize_as = "MyDateTimeWrapper")]
    pub publication_time: DateTime<Local>,
    pub id: i32,
    pub name: String,
    pub latitude: BigDecimal,
    pub longitude: BigDecimal,
}

pub struct MyDateTimeWrapper(DateTime<Local>);

impl Into<DateTime<Local>> for MyDateTimeWrapper {
    fn into(self) -> DateTime<Local> {
        self.0
    }
}

impl<DB, ST> Queryable<ST, DB> for MyDateTimeWrapper
where
    DB: Backend,
    DateTime<Utc>: Queryable<ST, DB>,
{
    type Row = <DateTime<Utc> as Queryable<ST, DB>>::Row;

    fn build(row: Self::Row) -> Self {
        // Deserialize as UTC, then convert to the local offset.
        Self(<DateTime<Utc> as Queryable<ST, DB>>::build(row).with_timezone(&Local))
    }
}
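With the wrapper in place, the stored instant is unchanged and only the offset differs on output: the example row then serializes as 2021-04-08T15:02:02.514+02:00 instead of the Z-suffixed 2021-04-08T13:02:02.514Z.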

Deleting from an associated table with a subquery using Diesel from a postgres database

I have a query that I am trying to translate from SQL into Rust/diesel, but I am running into issues with creating a subquery using diesel.
I am using diesel = "1.4.2" along with the postgres feature.
I have the following schema and models...
#[macro_use]
extern crate diesel;

mod schema {
    table! {
        jobs (id) {
            id -> Int4,
        }
    }

    table! {
        appointments (id) {
            id -> Int4,
            job_id -> Int4,
        }
    }

    table! {
        categories (id) {
            id -> Int4,
        }
    }

    table! {
        appointments_categories (appointment_id, category_id) {
            appointment_id -> Int4,
            category_id -> Int4,
        }
    }
}

mod models {
    use super::schema::*;

    #[derive(Debug, Identifiable)]
    pub struct Job {
        pub id: i32,
    }

    #[derive(Debug, Identifiable)]
    pub struct Appointment {
        pub id: i32,
    }

    #[derive(Debug, Identifiable)]
    #[table_name = "categories"]
    pub struct Category {
        pub id: i32,
    }

    #[derive(Debug, Identifiable)]
    #[table_name = "appointments_categories"]
    #[primary_key(appointment_id, category_id)]
    pub struct AppointmentCategory {
        pub appointment_id: i32,
        pub category_id: i32,
    }
}

fn main() {}
And then I have this SQL query:
DELETE FROM appointments_categories
WHERE ROW ("appointment_id", "category_id")
IN (
    SELECT appointments.id AS appointment_id, appointments_categories."category_id"
    FROM appointments
    INNER JOIN appointments_categories ON appointments_categories."appointment_id" = appointments.id
    WHERE appointments."job_id" = 125
    LIMIT 10000
);
So far I have tried the following approach, but I am unable to figure out how to bind the subquery/expression.
let sub_query = appointment_dsl::appointment
    .inner_join(appt_cat_dsl::appointments_categories)
    .filter(appointment_dsl::job_id.eq(job_id))
    .select((appointment_dsl::id, appt_cat_dsl::category_id));

let rows_deleted = delete(
    appt_cat_dsl::appointments_categories
        .filter(sql(&format!("ROW(appointment_id, category_id) IN {}", sub_query))),
)?;
I understand that there are other ways to write the delete query, but I need to be able to limit the number of rows it deletes. The junction table is massive, with 3 million rows per job, and the job runs every 15 minutes; deleting everything at once locks the DB up, so that isn't an option.
Sorry I can't make a reproducible sample on the Rust Playground, since it doesn't have diesel.
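Since diesel 1.4 has no first-class support for a ROW (…) IN (subquery) delete, one fallback, sketched here on the assumption that raw SQL is acceptable, is diesel::sql_query with a bind parameter (delete_batch is a hypothetical helper name):
use diesel::prelude::*;
use diesel::sql_types::Integer;

// Sketch: run the whole statement as raw SQL and bind the job id.
// The LIMIT keeps each batch small so the delete does not hold
// locks on the 3-million-row junction table for long.
fn delete_batch(conn: &PgConnection, job_id: i32) -> QueryResult<usize> {
    diesel::sql_query(
        "DELETE FROM appointments_categories
         WHERE ROW (appointment_id, category_id) IN (
             SELECT a.id, ac.category_id
             FROM appointments a
             INNER JOIN appointments_categories ac ON ac.appointment_id = a.id
             WHERE a.job_id = $1
             LIMIT 10000)",
    )
    .bind::<Integer, _>(job_id)
    .execute(conn)
}
Calling this in a loop until it returns 0 deleted rows works through a job's rows in bounded chunks instead of one table-locking delete.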