I used r2d2_postgres to create a connection pool:
fn get_connection_pool(
) -> Result<r2d2::Pool<r2d2_postgres::PostgresConnectionManager<postgres::tls::NoTls>>, Error> {
    let manager = PostgresConnectionManager::new(
        "host=localhost user=someuser password=hunter2 dbname=mydb"
            .parse()
            .unwrap(),
        NoTls,
    );
    let pool = r2d2::Pool::new(manager).unwrap();
    Ok(pool)
}
And then cloned the connection pool into a warp request handler
if let Ok(pool) = pool_conns {
    let hello = warp::path!("get_quote" / "co_num" / String / "ctrl_num" / String)
        .map(move |co, ctrl| autorate::get_quote(co, ctrl, pool.clone()));
    warp::serve(hello).run(([127, 0, 0, 1], 8889)).await;
}
Then called pool.get() inside the request
let mut client = pool.get().unwrap();
but received the runtime error
thread 'main' panicked at 'Cannot start a runtime from within a runtime. This happens because a function
(like 'block_on') attempted to block the current thread while the thread is being used to drive asynchronous tasks.'
My question is: in Rust, how should these two concepts work together? Specifically I mean a postgres connection pool and an async web server. I'm thinking I should have a connection pool and be able to pass it into each request to dole out connections as needed. Am I using the wrong connection pool, or just passing it in the wrong way?
Several kind folks on reddit steered me in the right direction. Instead of r2d2, I needed an async connection pool, so I switched to deadpool_postgres. It ended up looking like this:
#[tokio::main]
async fn main() {
    // Config here is tokio_postgres::Config (host/user/password/dbname are its
    // builder methods); Manager and Pool come from deadpool_postgres.
    let mut cfg = Config::new();
    cfg.host("yourhost");
    cfg.user("youruser");
    cfg.password("yourpass");
    cfg.dbname("yourdb");
    let mgr = Manager::new(cfg, tokio_postgres::NoTls);
    // A pool with at most 16 connections.
    let pool = Pool::new(mgr, 16);
    let get_quote = warp::path!("get_quote" / "co_num" / String / "ctrl_num" / String)
        .and(warp::any().map(move || pool.clone()))
        .and_then(autorate::get_quote);
    warp::serve(get_quote).run(([127, 0, 0, 1], 8889)).await;
}
And then to use a connection:
pub async fn get_quote(
    co: String,
    ctrl: String,
    pool: deadpool_postgres::Pool,
) -> Result<impl warp::Reply, std::convert::Infallible> {
    let co_result = Decimal::from_str(&co);
    let ctrl_result = Decimal::from_str(&ctrl);
    let client = pool.get().await.unwrap();
    if let (Ok(co_num), Ok(ctrl_num)) = (co_result, ctrl_result) {
        let orders_result = get_orders(&client, &co_num, &ctrl_num).await;
        if let Ok(orders) = orders_result {
            if let Ok(rated_orders) = rate_orders(orders, &client).await {
                return Ok(warp::reply::json(&rated_orders));
            }
        }
    }
    Ok(warp::reply::json(&"No results".to_string()))
}
async fn get_orders(
    client: &deadpool_postgres::Client,
    co: &Decimal,
    ctrl: &Decimal,
) -> Result<Vec<Order>, Error> {
    for row in client
        .query().await
        ...
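The query call above is cut off in the original; for completeness, here is a hypothetical version of get_orders showing the usual prepare-then-query pattern with deadpool_postgres / tokio_postgres. The table and column names are invented, and binding Decimal parameters assumes rust_decimal's postgres integration feature is enabled.
async fn get_orders_sketch(
    client: &deadpool_postgres::Client,
    co: &Decimal,
    ctrl: &Decimal,
) -> Result<Vec<tokio_postgres::Row>, tokio_postgres::Error> {
    use tokio_postgres::types::ToSql;
    // Prepare once, then bind parameters instead of formatting them into the SQL string.
    let stmt = client
        .prepare("SELECT * FROM orders WHERE co_num = $1 AND ctrl_num = $2")
        .await?;
    let params: &[&(dyn ToSql + Sync)] = &[co, ctrl];
    // Map each returned Row into an Order in the real code.
    client.query(&stmt, params).await
}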
Related
This is my async function that uses Rust to connect to an existing MongoDB database. Is there a way to return / export the client variable / object and make it usable in other files / functions?
async fn connect_to_db() -> Result<(), Box<dyn Error>> {
    // Load the MongoDB connection string from an environment variable (or string):
    let client_uri = "mongodb://localhost:27017";
    let options =
        ClientOptions::parse_with_resolver_config(&client_uri, ResolverConfig::cloudflare())
            .await?;
    let client = Client::with_options(options)?;
    let db = client.database("fiesta");
    // Select collection(s)
    let user_col: Collection<User> = db.collection("users");
    let skills_col: Collection<Skill> = db.collection("skills");
    Ok(())
}
Help would be appreciated.
Dependencies:
rocketrs
mongodb
tokio & serde
I have tried multiple things such as changing the lifetimes and changing the return type of the function, but to no avail.
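The usual pattern is to return the handles rather than (). Below is a rough sketch (the Db struct and its field names are mine, not from the question; it assumes the mongodb crate's async API and the same collection names as above) that returns the typed collections so other files / functions can use them.
use mongodb::{options::ClientOptions, Client, Collection};

pub struct Db {
    pub users: Collection<User>,
    pub skills: Collection<Skill>,
}

async fn connect_to_db() -> Result<Db, Box<dyn std::error::Error>> {
    let options = ClientOptions::parse("mongodb://localhost:27017").await?;
    let client = Client::with_options(options)?;
    let db = client.database("fiesta");
    Ok(Db {
        users: db.collection("users"),
        skills: db.collection("skills"),
    })
}
Called once at startup, the returned Db (or the Client itself, which is cheap to clone) can then be kept in your framework's managed state, e.g. Rocket's .manage(), and borrowed from handlers.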
I'm currently trying to create a simple integration test that, for example, tries the signup endpoint.
Coming from many other backend languages, I'm used to rolling back the database after each test.
How can I do this using sqlx?
Is there any way to start sqlx with some kind of test transaction?
I can't find anything on this.
#[actix_rt::test]
async fn signup_test() {
    let params = SignupRequest {
        login: "bruce8@wayne.com".into(),
        password: "testtest123".into(),
    };
    let app_state = AppState::init().await;
    let mut app = test::init_service(
        App::new()
            .app_data(web::Data::new(app_state.clone()))
            .configure(configure),
    )
    .await;
    let req = test::TestRequest::post()
        .insert_header(("content-type", "application/json"))
        .set_json(params)
        .uri("/auth")
        .to_request();
    let resp = test::call_service(&mut app, req).await;
    log::info!("----> {}", resp.status());
    assert!(resp.status().is_success());
}
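In sqlx, a Transaction that goes out of scope without .commit() is rolled back, so one common pattern is to run each test's queries on a transaction taken from the pool. A rough sketch (the connection string and table are made up, and this only covers tests that talk to the database directly; the endpoint test above goes through the app's own pool, which would not see this transaction):
#[actix_rt::test]
async fn signup_rolls_back() {
    let pool = sqlx::PgPool::connect("postgres://localhost/test_db")
        .await
        .unwrap();
    // Everything executed on `tx` is rolled back automatically when `tx`
    // is dropped without commit().
    let mut tx = pool.begin().await.unwrap();

    sqlx::query("INSERT INTO users (login) VALUES ($1)")
        .bind("bruce8@wayne.com")
        .execute(&mut *tx)
        .await
        .unwrap();

    // ... assertions against `tx` here ...

    // No tx.commit(): the insert disappears when the test ends.
}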
I've got an Actix-web server that connects to a Postgres DB.
I've noticed that after about 1000 requests, my Postgres DB's RAM usage has spiked.
When I stop actix-web, the RAM held by the db is cleared. This leads me to believe that my code is not releasing the connection.
I cannot find an example of connections actually being released; it seems to be implicit in everyone else's code.
Here's mine:
async fn hellow_world(a: f32, b: f32, pool: &Pool) -> Result<Value, PoolError> {
    let client: Client = pool.get().await?;
    let sql = format!("select \"json\" from public.table_a WHERE a={} and b={}", a, b);
    let stmt = client.prepare(&sql).await?;
    let row = client.query_one(&stmt, &[]).await?;
    let result: Value = row.get(0);
    Ok(result)
}
#[derive(Deserialize)]
pub struct MyRequest {
    a: f32,
    b: f32,
}

#[get("/hello")]
async fn sv_hellow_world(
    info: web::Query<MyRequest>,
    db_pool: web::Data<Pool>,
) -> Result<HttpResponse, Error> {
    let response: Value = hellow_world(info.a, info.b, &db_pool).await?;
    Ok(HttpResponse::Ok().json(response))
}
#[actix_rt::main]
async fn main() -> std::io::Result<()> {
    dotenv().ok();
    let config = Config::from_env().unwrap();
    let pool = config.pg.create_pool(tokio_postgres::NoTls).unwrap();
    env_logger::from_env(Env::default().default_filter_or("info")).init();
    let server = HttpServer::new(move || {
        App::new()
            .wrap(Logger::default())
            .wrap(Logger::new("%a %{User-Agent}i"))
            .data(pool.clone())
            .service(sv_hellow_world)
    })
    .bind("0.0.0.0:3000")?
    .run();
    server.await
}
Based on further testing, @Werner determined that the code was piling up server-side prepared statements.
It is not clear whether these statements can be closed using this library.
Either of two approaches can be used to avoid this problem:
Use a single, shared prepared statement
Use the direct query form instead of the prepared statement
I recommend the first approach on principle as it is more efficient and protects against SQL Injection. It should look something like this:
async fn hellow_world(a: f32, b: f32, pool: &Pool) -> Result<Value, PoolError> {
    let client: Client = pool.get().await?;
    let stmt = client
        .prepare("select \"json\" from public.table_a WHERE a=$1::numeric and b=$2::numeric")
        .await?;
    let row = client.query_one(&stmt, &[&a, &b]).await?;
    let result: Value = row.get(0);
    Ok(result)
}
Using this code, only one prepared statement should be created on each of the pool's connections.
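For completeness, the second approach from the list above (passing the SQL text directly, so there is no explicit prepare() in the application code) would look roughly like this; depending on the actual column types, the ::numeric casts from the version above may still be needed:
async fn hellow_world(a: f32, b: f32, pool: &Pool) -> Result<Value, PoolError> {
    let client: Client = pool.get().await?;
    // query_one accepts the SQL text itself; the parameters are still bound,
    // so no user input is formatted into the statement.
    let row = client
        .query_one(
            "select \"json\" from public.table_a WHERE a=$1 and b=$2",
            &[&a, &b],
        )
        .await?;
    let result: Value = row.get(0);
    Ok(result)
}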
I am experimenting with a standalone script that will query a Postgres database using Vapor and Fluent. In a normal Vapor API application this is simply done by:
router.get("products") { request in
return Product.query(on: request).all()
}
However, in a standalone script, since there is no "request", I get stuck on what to replace the "request" or DatabaseConnectable with. Here's where I get stuck:
import Fluent
import FluentPostgreSQL

let databaseConfig = PostgreSQLDatabaseConfig(hostname: "localhost",
                                              username: "test",
                                              database: "test",
                                              password: nil)
let database = PostgreSQLDatabase(config: databaseConfig)
let foo = Product.query(on: <??WhatDoIPutHere??>)
I tried creating an object that conforms to DatabaseConnectable, but couldn't figure out how to correctly get that object to conform.
You will need to create an event loop group to be able to make database requests. SwiftNIO's MultiThreadedEventLoopGroup is good for this:
let worker = MultiThreadedEventLoopGroup(numberOfThreads: 2)
You can change the number of threads used as you need.
Now you can create a connection to the database with that worker:
let conn = try database.newConnection(on: worker)
The connection is in a future, so you can map it and pass the connection in your query:
conn.flatMap { connection in
    return Product.query(on: connection)...
}
Make sure you shut down your worker when you are done with it using shutdownGracefully(queue:_:).
The above is very good, but just to clarify how simple it is once you get it, I have made a small test example for this. Hope it helps you.
final class StandAloneTest: XCTestCase {
    var expectation: XCTestExpectation?

    func testDbConnection() -> Void {
        expectation = XCTestExpectation(description: "Waiting")
        let databaseConfig = PostgreSQLDatabaseConfig(hostname: "your.hostname.here",
                                                      username: "username",
                                                      database: "databasename",
                                                      password: "topsecretpassword")
        let database = PostgreSQLDatabase(config: databaseConfig)
        let worker = MultiThreadedEventLoopGroup(numberOfThreads: 2)
        let conn = database.newConnection(on: worker)
        let sc = SomeClass(a: 1, b: 2, c: 3) // etc.
        // Get all the tuples for this class type in the database
        let futureTest = conn.flatMap { connection in
            return SomeClass.query(on: connection).all()
        }
        // Or save a new tuple by uncommenting the below
        //let futureTest = conn.flatMap { connection in
        //    return someClassInstantiated.save(on: connection)
        //}
        // Let's just wait for the future to test it
        // (PS: this blocks the thread and should not be used in production)
        do {
            let test = try futureTest.wait()
            expectation?.fulfill()
            try worker.syncShutdownGracefully()
            print(test)
        } catch {
            expectation?.fulfill()
            print(error)
        }
    }
}
//Declare the class you want to test here using the Fluent stuff in some extension
I've got a thread that maintains a list of sockets, and I'd like to traverse the list and see if there is anything to read; if so, act upon it; if not, move on to the next. The problem is that as soon as I come across the first node, all execution halts until something comes through on the read.
I'm using std::io::Read::read(&mut self, buf: &mut [u8]) -> Result<usize>
From the doc
This function does not provide any guarantees about whether it blocks waiting for data, but if an object needs to block for a read but cannot it will typically signal this via an Err return value.
Digging into the source, the TcpStream Read implementation is
impl Read for TcpStream {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> { self.0.read(buf) }
}
Which invokes
pub fn read(&mut self, buf: &mut [u8]) -> IoResult<uint> {
    let fd = self.fd();
    let dolock = || self.lock_nonblocking();
    let doread = |nb| unsafe {
        let flags = if nb {c::MSG_DONTWAIT} else {0};
        libc::recv(fd,
                   buf.as_mut_ptr() as *mut libc::c_void,
                   buf.len() as wrlen,
                   flags) as libc::c_int
    };
    read(fd, self.read_deadline, dolock, doread)
}
And finally, this calls read<T, L, R>(fd: sock_t, deadline: u64, mut lock: L, mut read: R), where I can see loops over non-blocking reads until data has been retrieved or an error has occurred.
Is there a way to force a non-blocking read with TcpStream?
Updated Answer
It should be noted that, as of Rust 1.9.0, std::net::TcpStream has added functionality:
fn set_nonblocking(&self, nonblocking: bool) -> Result<()>
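With that in place, the polling loop described in the question can be written with plain std; a minimal sketch (the helper name is mine):
use std::io::{ErrorKind, Read};
use std::net::TcpStream;

// Returns Ok(Some(bytes)) if data was available, Ok(None) if the read would
// have blocked (nothing to do for this socket yet), and Err on real errors.
fn try_read(stream: &mut TcpStream) -> std::io::Result<Option<Vec<u8>>> {
    stream.set_nonblocking(true)?;
    let mut buf = [0u8; 512];
    match stream.read(&mut buf) {
        // n == 0 means the peer closed the connection.
        Ok(n) => Ok(Some(buf[..n].to_vec())),
        Err(e) if e.kind() == ErrorKind::WouldBlock => Ok(None),
        Err(e) => Err(e),
    }
}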
Original Answer
I couldn't get exactly that with TcpStream, and I didn't want to pull in a separate lib for IO operations, so I decided to set the file descriptor as non-blocking before using it and execute a system call to read/write. Definitely not the safest solution, but less work than implementing a new IO lib, even though MIO looks great.
extern "system" {
fn read(fd: c_int, buffer: *mut c_void, count: size_t) -> ssize_t;
}
pub fn new(user: User, stream: TcpStream) -> Socket {
// First we need to setup the socket as Non-blocking on POSIX
let fd = stream.as_raw_fd();
unsafe {
let ret_value = libc::fcntl(fd,
libc::consts::os::posix01::F_SETFL,
libc::consts::os::extra::O_NONBLOCK);
// Ensure we didnt get an error code
if ret_value < 0 {
panic!("Unable to set fd as non-blocking")
}
}
Socket {
user: user,
stream: stream
}
}
pub fn read(&mut self) {
    let count = 512 as size_t;
    let mut buffer = [0u8; 512];
    let fd = self.stream.as_raw_fd();
    let mut num_read = 0 as ssize_t;
    unsafe {
        let buf_ptr = buffer.as_mut_ptr();
        let void_buf_ptr: *mut c_void = mem::transmute(buf_ptr);
        num_read = read(fd, void_buf_ptr, count);
        if num_read > 0 {
            println!("Read: {}", num_read);
        }
        println!("test");
    }
}