Setting up a golang API that queries a database every hour to refresh its data

I'm relatively new to golang and could use some high-level tips on how to structure this. I'm trying to build a REST API:
The user would provide a small JSON payload via POST method
The API compares the user’s input data against a reference dataset stored as a slice of structs and calculates a value
This value is returned to the user
Every hour, the database is queried and the current slice of structs is replaced with a freshly built one. Basically this refreshes the reference data
I'd like this refreshing job to be async so it doesn't slow down the user experience
I'm using the Echo framework (https://echo.labstack.com/). Here is my attempt in Go-like pseudocode.
How would you structure this API to refresh the data hourly async?
To clarify, the part I'm stuck on is the “query the DB every hour async in the background” bit. I'm unsure how to do that in this scenario.
func main() {
	e := echo.New()
	e.POST("/", func(c echo.Context) error {
		// Bind the user's JSON payload (Input is a placeholder type)
		inputJSON := new(Input)
		if err := c.Bind(inputJSON); err != nil {
			return err
		}
		// This func queries the DB and saves the reference dataset as a slice of structs
		dataset := refreshDB()
		// Does some calculations on input JSON data and reference dataset
		result := doCalcs(inputJSON, dataset)
		// Prep response in neat JSON
		responseForUser := prepOutput(result)
		return c.JSON(http.StatusOK, responseForUser)
	})
	e.Logger.Fatal(e.Start(":8080"))
}

For async code in Go you can use a goroutine; to execute code periodically you can use a ticker.
package main

import (
	"sync"
	"time"
)

// Item is whatever element type your reference dataset uses.
type Item struct{ /* ... */ }

var rwm sync.RWMutex
var sliceOfStructs []Item

func main() {
	go func() {
		tick := time.NewTicker(time.Hour)
		defer tick.Stop()
		for range tick.C {
			// Build the new dataset outside the lock, then swap it in.
			fresh := []Item{ /* refresh with new data */ }
			rwm.Lock()
			sliceOfStructs = fresh
			rwm.Unlock()
		}
	}()
	// start server
}
If sliceOfStructs needs to be accessible across multiple packages then you'll need to export it and move it to a non-main package, i.e. one that can be imported. And do the same for rwm.
Make sure that any code that reads sliceOfStructs invokes rwm.RLock before reading and rwm.RUnlock when done.
If you have more than one goroutine that needs to write sliceOfStructs then you should change rwm from sync.RWMutex to sync.Mutex.
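To connect this back to the question, here is a minimal sketch of the read side inside an Echo handler; Input, doCalcs and prepOutput are the placeholder names from the question's pseudocode, and http.StatusOK assumes the usual net/http import:
e.POST("/", func(c echo.Context) error {
	input := new(Input)
	if err := c.Bind(input); err != nil {
		return err
	}
	// Hold the read lock only while grabbing the current slice value.
	// This is safe because the refresher replaces the slice wholesale
	// instead of mutating it in place.
	rwm.RLock()
	dataset := sliceOfStructs
	rwm.RUnlock()
	result := doCalcs(input, dataset)
	return c.JSON(http.StatusOK, prepOutput(result))
})
This way a request never triggers a DB query itself; it only reads whatever dataset the background goroutine last installed.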

Related

React Query: compare old data with new data in refetchInterval

From a React Query call I have a refetchInterval where I pass a function that receives the fresh data:
refetchInterval: (data) => (compareData(data) ? false : 1000),

const compareData = (freshData) => {
	// ...would like to access the previous data to compare with freshData
	// ...if different, stop the interval
};
I want a way to get the previous data from the refetchInterval function. Is there a way to do this?
So far all I can get back is the fresh data. I want to be able to compare my new fresh data with the previous stale data.
I've seen something called isDataEqual that you can set on the query's config, but I can't find any docs on how to use it.

Is it possible to deserialize tokio_postgres rows without Struct?

I am new to Rust and trying to build a simple API server which connects to a PostgreSQL db and has an API route that runs a direct SQL query and outputs JSON as the result.
I googled and found that all the examples in the available packages require unwrapping the data per row into a struct first, which is something I am trying to bypass. I would like the ability to run a dynamic SQL query and output it as JSON data to the client.
I am using actix-web, deadpool-postgres and tokio_postgres
Here is what I have so far
main.rs
use actix_web::{dev::ServiceRequest, web, App, HttpServer};
use deadpool_postgres::{Manager, Pool};
use tokio_postgres::{Config, NoTls};

mod handlers;

#[actix_rt::main]
async fn main() -> std::io::Result<()> {
    dotenv::dotenv().ok();
    std::env::set_var("RUST_LOG", "actix_web=debug");

    let mut cfg = Config::new();
    cfg.host("localhost");
    cfg.port(5432);
    cfg.user("postgres");
    cfg.password("postgres");
    cfg.dbname("testdb");
    let mgr = Manager::new(cfg, NoTls);
    let pool = Pool::new(mgr, 100);

    // Start http server
    HttpServer::new(move || {
        App::new()
            .data(pool.clone())
            .route("/ExecuteQuery", web::get().to(handlers::execute_query))
    })
    .bind("127.0.0.1:8081")?
    .run()
    .await
}
Here's the handlers.rs
use actix_web::{web, HttpResponse, Error}; // Responder};
use deadpool_postgres::Pool;
// use tokio_postgres::{Error};

pub async fn execute_query(db: web::Data<Pool>) -> Result<HttpResponse, Error> {
    let conn = db.get().await.unwrap();
    let statement = conn.prepare("SELECT * FROM People").await.unwrap();
    let rows = conn.query(&statement, &[]).await?;
    // I am trying to use the following lines but they give a type-mismatch compile error
    // let people = serde_postgres::from_rows(&rows).unwrap();
    // let json = rustc_serialize::json::encode(people).unwrap();
    Ok(HttpResponse::Ok().json("Route called successfully"))
}
Could someone please share a code snippet if you have been able to do this without a struct.
Thanks
As far as I know, query results from the postgres database are not in JSON format. If you need JSON data, you have to fetch the data first and then convert it to JSON manually. At present there doesn't seem to be a crate that automatically converts the data to JSON, because compared to JSON it is obviously easier to parse the result directly into a struct.
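To illustrate what that manual conversion could look like, here is a rough sketch; rows_to_json is a made-up helper, serde_json is an assumed dependency, and only a few common column types are handled (anything else would need its own branch):
use serde_json::{Map, Value};
use tokio_postgres::types::Type;
use tokio_postgres::Row;

// Convert rows to a JSON array by checking each column's Postgres type.
fn rows_to_json(rows: &[Row]) -> Value {
    let out: Vec<Value> = rows
        .iter()
        .map(|row| {
            let mut obj = Map::new();
            for (i, col) in row.columns().iter().enumerate() {
                let t = col.type_();
                let v = if *t == Type::INT4 {
                    row.get::<_, Option<i32>>(i).map(Value::from)
                } else if *t == Type::INT8 {
                    row.get::<_, Option<i64>>(i).map(Value::from)
                } else if *t == Type::TEXT || *t == Type::VARCHAR {
                    row.get::<_, Option<String>>(i).map(Value::from)
                } else if *t == Type::BOOL {
                    row.get::<_, Option<bool>>(i).map(Value::from)
                } else {
                    None // unsupported type: emit null
                };
                obj.insert(col.name().to_string(), v.unwrap_or(Value::Null));
            }
            Value::Object(obj)
        })
        .collect();
    Value::Array(out)
}
The handler above could then end with Ok(HttpResponse::Ok().json(rows_to_json(&rows))).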
Solution A:
postgres-derive/src/fromsql.rs might be the easiest way: every field's value comes from postgres-types/src/private.rs, and we don't need a prepared struct because all types are based on postgres-types/src/type_gen.rs, which is enough for basic usage. In theory we can even get all kinds of OIDs through a PostgreSQL query, even for user-defined structures.
Solution B:
SELECT some_compression_algorithm(json_agg(t), compression_level) FROM (
your query here
) t;
the problem is there might be user-defined structures; I am confused by those too...
but...
there is something interesting: tokio-postgres-mapper uses the quote crate in a proc-macro to map PostgreSQL tables to structs, so why not use quote or something like it to build another crate, or even just use it in our project?
I'll try to update my answer in a few days; otherwise I must go back to Kotlin & Vert.x (just for fun)

Delphi REST - How do I know when the data is all retrieved?

In Delphi Tokyo, I have a series of REST components (the ones shipped with Delphi: RESTClient, RESTRequest, RESTResponse, RESTAdapater) tied together to retrieve REST data.
The REST call, as defined on the server, has pagination set to some value.
As such, within my Delphi app I have to repeatedly update the RESTRequest.ResourceSuffix to add '?page=' and then a page number.
Since various REST services may have different pagination or different result row counts, how do I know when I have retrieved all the data?
Surely there is something more elegant than just trying again until rows retrieved = 0 or I hit some error.
I found a solution that works for me. My data is coming from Oracle RDBMS, via ORDS and APEX. The REST content (specifically the raw data) has a URL in it for the next pagination set. For the FIRST set of data, this URL reference is at the end of the REST data stream. For each subsequent data set, the URL reference is at the beginning of the raw data stream, so you have to check both locations. Here is the code I am using...
function IsMoreRESTDataAvailable(Content: String) : Boolean;
var
  Test600 : String;
begin
  // This routine takes the RESTResponse.Content, aka the raw REST data, and checks
  // to see if there is a "next" reference either at the end of the data (only there
  // for the FIRST data set) or at the beginning of the data.
  Result := False;
  Test600 := RightStr(Content, 600);
  if AnsiPos('"next":{"$ref":"https://<YOUR_SERVER_HERE>', Test600) <> 0 then
  begin
    Result := True;
    Exit;
  end;
  // If we didn't find it at the end of the REST content, then check at the beginning
  Test600 := LeftStr(Content, 600);
  if AnsiPos('"next":{"$ref":"https://<YOUR_SERVER_HERE>', Test600) <> 0 then
  begin
    Result := True;
    Exit;
  end;
end;

synchronous queries to MongoDB

Is there a way to make synchronous queries to MongoDB?
I'd like to run some code only after I've retrieved all my data from the DB.
Here is a sample snippet.
Code Snippet A
const brandExists = Brands.find({name: trxn.name}).count();
Code Snippet B
if (brandExists == 0) {
	Brands.insert({
		name: trxn.name,
		logo: "default.png",
	});
	Trxs.insert({
		userId,
		merchant_name,
		amt,
	});
}
I'd like Code snippet B to run only after Code Snippet A has completed its data retrieval from the DB. How would one go about doing that?
You can use a simple async function; an async function always returns a promise.
let brandExists;

async function brandExist() {
	brandExists = Brands.find({
		name: trxn.name
	}).count();
}

brandExist().then(() => {
	// Your code comes here
	if (brandExists == 0) {
		Brands.insert({
			name: trxn.name,
			logo: "default.png",
		});
		Trxs.insert({
			userId,
			merchant_name,
			amt,
		});
	}
});
I don't think using an if statement like the one you have makes sense: the queries are sent after each other; it is possible someone else creates a brand with the same name as the one you are working with between your queries to the database.
MongoDB has something called unique indexes you can use to enforce values being unique. You should be able to use name as a unique index. Then when you insert a new document into the collection, it will fail if there already exists a document with that name.
https://docs.mongodb.com/manual/core/index-unique/
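For example, in the mongo shell (the collection name is an assumption):
// Enforce unique brand names at the database level.
db.brands.createIndex({ name: 1 }, { unique: true })
After that, an insert with a duplicate name fails with a duplicate-key error (code 11000), which your code can handle instead of doing a separate existence check.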
In Meteor, MongoDB queries are synchronous, so it already delivers what you need. No need to make any changes; snippet B will only run after snippet A.
When we say a function is asynchronous, we mean that calling it is non-blocking: our program calls the function and keeps going, without waiting for the response.
If a function is synchronous, our program calls it and waits until it receives a response before continuing with the rest of the program.
Meteor is based on Node, which is asynchronous by nature, but coding with only asynchronous functions can lead to what developers call "callback hell".
On the server side, Meteor decided to go with Fibers, which allows functions to wait for the result, resulting in synchronous-style code.
There are no Fibers on the client side, so every time your client calls a server method, that call will be asynchronous (you'll have to worry about callbacks).
Your code is server-side code, and thanks to Fibers you can be assured that snippet B will only run after snippet A.
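As a small illustration of the client-side case mentioned above (the method name here is made up):
// Client: asynchronous, so the result arrives in a callback.
Meteor.call('brands.exists', trxn.name, (err, exists) => {
	if (!err && !exists) {
		// safe to insert here
	}
});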

Squeryl: Run query explicitly

When I create a query in Squeryl, it returns a Query[T] object. The query is not executed until I iterate over the Query object (Query[T] extends Iterable[T]).
Around the execution of a query there has to be either a transaction{} or an inTransaction{} block.
I'm only talking about SELECT queries, where transactions wouldn't really be necessary, but the Squeryl framework requires them.
I'd like to create a query in the model of my application and pass it directly to the view where a view helper in the template iterates over it and presents the data.
This is only possible when putting the transaction{} block in the controller (the controller includes the call of the template, so the template which does the iteration is also inside). It's not possible to put the transaction{} block in the model, because the model doesn't really execute the query.
But in my understanding the transaction has nothing to do with the controller. It's a decision of the model which database framework to use, how to use it and where to use transactions. So I want the transaction{} block to be in the model.
I know that I can - instead of returning the Query[T] instance - call Iterable[T].toList on the Query[T] object and then return the created list. Then the whole query is executed in the model and everything is fine. But I don't like this approach, because all the data requested from the database has to be cached in this list. I'd prefer a way where the data is passed directly to the view. I like the MySQL feature of streaming the result set when it's large.
Is there any possibility? Maybe something like a function Query[T].executeNow() which sends the request to the database, is able to close the transaction, but still uses the MySQL streaming feature and receives the rest of the (selected and therefore fixed) result set when it's accessed? Because the result set is fixed in the moment of the query, closing the transaction shouldn't be a problem.
The general problem that I see here is that you try to combine the following two ideas:
lazy computation of data; here: database results
hiding the need for a post-processing action that must be triggered when the computation is done; here: hiding from your controller or view that the database session must be closed
Since your computation is lazy and since you are not obliged to perform it to the very end (here: to iterate over the whole result set), there is no obvious hook that could trigger the post-processing step.
Your suggestion of invoking Query[T].toList does not exhibit this problem, since the computation is performed to the very end, and requesting the last element of the result set can be used as a trigger for closing the session.
That said, the best I could come up with is the following, which is an adaptation of the code inside org.squeryl.dsl.QueryDsl._using:
class IterableQuery[T](val q: Query[T]) extends Iterable[T] {
  private var lifeCycleState: Int = 0
  private var session: Session = null
  private var prevSession: Option[Session] = None

  def start() {
    assert(lifeCycleState == 0, "Queries may not be restarted.")
    lifeCycleState = 1
    /* Create a new session for this query. */
    session = SessionFactory.newSession
    /* Store and unbind a possibly existing session.
       (Assign to the field, not a local val, so stop() can re-bind it.) */
    prevSession = Session.currentSessionOption
    if (prevSession != None) prevSession.get.unbindFromCurrentThread
    /* Bind newly created session. */
    session.bindToCurrentThread
  }

  def iterator = {
    assert(lifeCycleState == 1, "Query is not active.")
    q.toStream.iterator
  }

  def stop() {
    assert(lifeCycleState == 1, "Query is not active.")
    lifeCycleState = 2
    /* Unbind session and close it. */
    session.unbindFromCurrentThread
    session.close
    /* Re-bind previous session, if it existed. */
    if (prevSession != None) prevSession.get.bindToCurrentThread
  }
}
Clients can use the query wrapper as follows:
var manualIt = new IterableQuery(booksQuery)
manualIt.start()
manualIt.foreach(println)
manualIt.stop()
// manualIt.foreach(println) /* Fails, as expected */
manualIt = new IterableQuery(booksQuery) /* Queries can be reused */
manualIt.start()
manualIt.foreach(b => println("Book: " + b))
manualIt.stop()
The invocation of manualIt.start() could already be done when the object is created, i.e., inside the constructor of IterableQuery, or before the object is passed to the controller.
However, working with resources (files, database connections, etc.) in such a way is very fragile, because the post-processing is not triggered in case of exceptions. If you look at the implementation of org.squeryl.dsl.QueryDsl._using you will see a couple of try ... finally blocks that are missing from IterableQuery.
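A more robust variant, sketched here as a hypothetical withQuery helper, wraps the start/stop pair in try ... finally so the session is closed even if iteration throws:
def withQuery[T, R](q: Query[T])(body: Iterable[T] => R): R = {
  val it = new IterableQuery(q)
  it.start()
  try {
    body(it)
  } finally {
    it.stop() /* runs even if body throws */
  }
}

/* Usage: */
withQuery(booksQuery) { books => books.foreach(println) }
The trade-off is that the controller or view must do its iteration inside the passed function, so the session's lifetime is bounded by a block again rather than hidden behind a plain Iterable.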