Where to store the database connection when using standalone Slick - Scala

I found many examples online that put a call to Database.forConfig inside a trait, and each repository extends this trait. Some examples:
https://github.com/BBartosz/akkaRestApi/blob/master/src/main/scala/utils/DatabaseConfig.scala
https://github.com/Platoonhead/SlickWithScala/blob/master/src/main/scala/com/edu/knoldus/connection/ConnectedDbMysql.scala
https://github.com/cdiniz/slick-akka-http/blob/master/src/main/scala/utils/PersistenceModule.scala
When there are many repositories, will this lead to too many instances of the DB client object, memory overhead, or other performance problems?
Isn't it better to have one object that calls Database.forConfig and holds a reference to the database?
What is the best practice here?

Here is an example of how I did it:
https://github.com/joesan/plant-simulator/blob/master/app/com/inland24/plantsim/config/AppConfig.scala
What I basically do is create a single instance of the value that calls the Slick API, and then specify the number of threads (effectively the number of connections) that I want in the connection pool.
http://slick.lightbend.com/doc/3.0.0/database.html
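
A minimal sketch of that approach, assuming a MySQL profile and a "myapp.database" config path (both illustrative, not from the linked repos): the Database is created once in a single object and passed to every repository, rather than each repository calling Database.forConfig itself.

import slick.jdbc.MySQLProfile.api._

// Single place that owns the connection pool.
object DbConnection {
  // Reads the pool settings (url, numThreads, ...) from application.conf once.
  lazy val db: Database = Database.forConfig("myapp.database")
}

// Repositories receive the shared Database instead of building their own.
class UserRepository(val db: Database) {
  // e.g. def findAll: Future[Seq[Row]] = db.run(someTable.result)
}

object Main extends App {
  // Every repository shares the single connection pool:
  val userRepo = new UserRepository(DbConnection.db)
}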

Related

Flutter Firebase Database Class in multiple files

I have a big project and manage a lot of data with Firebase. For this I have a class "MyFirestoreDatabase" that contains every single Firebase function, which I then call from my providers.
The problem is that the MyFirestoreDatabase class has gotten really, really big, and I want to split it up into subclasses and different files.
Every time I call a Firebase function I use MyFirestoreDatabase.instance.functionName(),
so I don't think I want different classes, because then I would have multiple instances of the database open at the same time, right?
Would it work to extend the class?
Calling FirebaseFirestore.instance always returns the same (default) instance, no matter how many times you call it. This is the essence of the singleton pattern.
So calling it in each separate class won't make any change in resource consumption, nor in the number of connections to the backend servers.
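
The question is about Dart/Flutter, but the singleton idea is language-agnostic. A hedged Scala sketch of the same structure (all names illustrative): the facade is split into feature-specific classes, and each one reads the same shared instance.

// The accessor always returns the one shared client, no matter who calls it.
class DatabaseClient private () {
  // holds the single underlying connection
}

object DatabaseClient {
  lazy val instance: DatabaseClient = new DatabaseClient
}

// Feature-specific helper classes can each call the accessor freely;
// they all share the same object, so no extra connections are opened.
class UserQueries  { private val db = DatabaseClient.instance }
class OrderQueries { private val db = DatabaseClient.instance }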

Is it normal to detach persistent objects when they do not need to be modified?

I have a spring-boot app that exposes a REST api. I am using the repository pattern for ORM.
My question is about the proper way to handle persistent objects when I do not need them to be persistent.
Is detaching them good practice?
For example in one situation I might query the UserRepository and then query the BlahBlahBlahRepository, and then do some calculations on the objects and return a result.
However, I do not need those objects to persist and be monitored, because they will not be changed. Do I need to be concerned about the overhead, or is there something I can do besides calling detach on those objects?
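
The question is about Java/Spring, but the JPA call at issue is small. A hedged Scala sketch of what the question describes (ReportService, User, and balance are hypothetical names): load an entity for a read-only computation, then detach it so the persistence context stops tracking it.

import javax.persistence.{Entity, EntityManager, Id}

// Hypothetical entity, only for illustration.
@Entity
class User {
  @Id var id: java.lang.Long = _
  var balance: java.math.BigDecimal = _
}

class ReportService(em: EntityManager) {
  def balanceFor(userId: java.lang.Long): BigDecimal = {
    val user = em.find(classOf[User], userId) // managed entity
    em.detach(user)                           // stop tracking: no dirty checking, no flush
    BigDecimal(user.balance)                  // plain computation on detached data
  }
}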

Issue Insert/Update EF Core DbContext in Azure QueueTrigger Function (Multi-threading)

I'm getting a PK violation exception when using an EF Core 2.1 DbContext in an Azure QueueTrigger function. My guess is that this is due to DbContext not being thread-safe and the Azure Function running different instances in parallel. I have read quite a few articles, but I can't find a good approach to solve this.
Here is my scenario (producer-consumer pattern):
I have a scheduled Azure Function that calls an API to get Projects from different external systems. To get all the required info for a project, I need to run different queries against other external services, so I'm decoupling this into another Azure Function: the scheduled function just queues a message per Project, such as “Sync Project ID 101”.
Another QueueTrigger function fires every time a message is queued, which means different instances running in parallel. This function must gather all the data for a specific Project, and that means more calls to other external services / APIs to (in some way) aggregate all the info about a Project. IMHO it's good to do it that way, as I can process multiple Projects in parallel, and I can scale the function if I need to.
Once I have all this Project info, I want to persist it in a SQL DB using EF Core (and here comes the issue).
Project data includes the Users in the Project, and each user has a specific GUID as PK (coming from the external system). That means I can have repeated user IDs across different function instances, and here is the problem: when I try to persist user info in a SQL table, I can get a PK duplication exception, as multiple function instances can try to insert the same user at the same time (when instance A checks whether the user exists it gets false, but another instance B is actually adding this user, so when instance A tries the insert, it fails).
I guess I could lock the DbContext somehow, but I'm not sure that's good, as I also have a website running queries against the SQL DB (read-only queries for now, but there could be updates in the future too).
Another idea could be to send the entire Project info to another queue / blob file, and have another function in Singleton mode that inserts the data into SQL.
I've created this project to simplify my scenario, but it's enough to reproduce the issue and understand the problem.
https://github.com/luismanez/queuetrigger-efcore-multithreading
Any other ideas or recommended approaches? (I'm open to changing the architecture if I find something better.)
Many thanks!
A "more easy" way could be to do some kind of upsert in the database. There is a sample of how to do that with EF Core: https://www.flexlabs.org/2018/02/adding-upsert-support-for-entity-framework-core

Why use interfaces

I see the benefit of interfaces, to be able to add new implementations via contract.
But I don't see how to solve the following problem:
Imagine you have an interface DB with a method "startTransaction".
Everything is fine: you implement it for MySQL and PostgreSQL. But tomorrow you move to MongoDB, and then you have no transaction support.
What do you do?
1) An empty method - bad, because you think you have transactions but you don't.
2) Create your own - then you need some parameters that differ from the regular "startTransaction" method.
And on top of that, sometimes simple interfaces just don't work.
Example: you need additional parameters for different implementations.
If you're exposing the concept of transactions on your interface, then you must functionally support transactions no matter what, since users of the interface will logically depend on it. I.e., if a caller can start a transaction, then they expect to also be able to roll back a transaction of several queries. Since Mongo doesn't natively have any concept of rolling back transactions, there's one of two possibilities:
You implement the possibility of rolling back queries in code, emulating the functionality of transactions for a database which doesn't natively support it. (Whether that's even reliably possible in Mongo is a debatable topic.)
Your interface is working at the wrong level of abstraction. If your interface is promising functionality an implementation can't deliver, then either the interface or the implementation is unrealistic.
In practice, Mongo and SQL databases are such different beasts that you would either never make this kind of change without changing large parts of your business logic around it, or you would specify an extremely minimal common-denominator interface, most certainly not exposing technology-specific concepts on an abstract interface.
You are mostly correct: interfaces can be very useful, but they are also problematic in (fast-)changing code. A best practice concerning interfaces is to keep them as small as possible.
When something can handle a transaction, create an interface only for handling a transaction. Split them up into the smallest logically possible parts; that way, when new classes emerge, you can assign them the specific interfaces that determine their methods (see the sketch below).
For the multiple-parameter problem, this can indeed be problematic. See if you can determine whether this specific value could be moved to a constructor, or whether it indicates that the action you are doing is in fact slightly different from the action that does not need this parameter.
I hope this helps, good luck.
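
A hedged Scala sketch of that splitting advice (all names illustrative): each capability gets its own tiny interface, and a backend only claims the ones it can honestly deliver.

trait QueryExecutor {
  def run(query: String): Seq[String]
}

trait Transactional {
  def startTransaction(): Unit
}

// A SQL backend can honestly claim both capabilities...
class PostgresDb extends QueryExecutor with Transactional {
  def run(query: String): Seq[String] = Seq.empty // body elided
  def startTransaction(): Unit = ()               // body elided
}

// ...while a store without transactions only claims what it can deliver,
// so callers can never ask it for a transaction it cannot provide.
class MongoDb extends QueryExecutor {
  def run(query: String): Seq[String] = Seq.empty // body elided
}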
You are right that interfaces are used to add new implementations via contract, but those implementations have to possess some similarity.
Let's take an example:
You cannot implement a dog using a human interface, even though both are living organisms, because they are too different.
That is the same thing you are trying to do here: you are trying to implement a non-SQL DB using a SQL DB's interface.

how to get all instances of a given class/trait with scala reflect? all refs to a given instance?

I know it's possible to get the members of a class, and of a given instance, but why is it hard to get all instances of a given class? Doesn't the JVM keep track of the instances of a class? This doesn't work in Java:
myInstance.getClass.getInstances()
Is this possible with the new scala reflect library? Are there possible workarounds?
I searched through the reflection scaladoc, on SO, and on Google, but strangely couldn't find any info on this very obvious question.
I want to experiment with / hack on a hypergraph database, inspired by HypergraphDB, querying the object graph directly and setting serialization aside.
Furthermore, I'd need access to all references to a given object. Now this information certainly is there (GC), but is it accessible by reflection?
thanks
EDIT: this appears to be possible at least by "debugging" the JVM from another JVM, using com.sun.jdi.ReferenceType.instances
"Keeping track" of all instances of a class is hardly desirable, at least not by default. There's considerable cost to doing so and the mechanism must avoid hard references that would prevent reclaiming otherwise unreferenced instances. That means using one of the reference types and all the associated machinery involved.
Garbage Collection does not need to be class-aware. It only cares about whether instances are reachable or not.
That said, you can write code to track instantiations on a class-by-class basis. You'd have to use one of the reference classes in java.lang.ref to track them.
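
A hedged Scala sketch of that idea, using java.lang.ref.WeakReference so the registry itself never keeps instances alive (all names illustrative):

import java.lang.ref.WeakReference
import scala.collection.mutable

// Registry of live instances; weak references let the GC reclaim objects
// normally, so tracking never becomes the thing that leaks them.
object InstanceRegistry {
  private val refs = mutable.ListBuffer.empty[WeakReference[AnyRef]]

  def register(obj: AnyRef): Unit = synchronized {
    refs += new WeakReference[AnyRef](obj)
  }

  // Only instances the GC has not yet reclaimed are returned;
  // cleared references simply yield None and are skipped.
  def liveInstances: List[AnyRef] = synchronized {
    refs.flatMap(r => Option(r.get())).toList
  }
}

// Any class that wants to be trackable registers itself on construction.
class Tracked {
  InstanceRegistry.register(this)
}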