Is there an option to do getBulk from the cache with Spring Cache and memcached?

I'm using Spring's caching abstraction in my application and the underlying cache is memcached.
memcached supports bulk lookup by providing a collection of keys; see the getBulk() Javadoc here.
However, Spring's Cache interface doesn't allow bulk lookup. Is there a specific reason for this, or is there a way to perform it?

The cache API is a base model to help us provide declarative caching; it is not meant to abstract every possible use of caching.
Each Cache implementation has a getNativeCache() method that returns the underlying library implementation. If you need access to a library-specific feature, that's what you should use.
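For example, a bulk read might look like the following minimal sketch. It assumes getNativeCache() returns a spymemcached MemcachedClient, which depends on which memcached-backed Cache implementation you are using:

```java
import java.util.Arrays;
import java.util.Map;

import net.spy.memcached.MemcachedClient;
import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;

public class BulkCacheReader {

    private final CacheManager cacheManager;

    public BulkCacheReader(CacheManager cacheManager) {
        this.cacheManager = cacheManager;
    }

    // Drops down from the Spring abstraction to the native client for a bulk read.
    public Map<String, Object> getBulk(String cacheName, String... keys) {
        Cache cache = cacheManager.getCache(cacheName);
        // Assumption: the underlying cache is backed by spymemcached.
        MemcachedClient client = (MemcachedClient) cache.getNativeCache();
        return client.getBulk(Arrays.asList(keys));
    }
}
```

The cast is the price of stepping outside the abstraction; keeping such calls in one place limits how far the coupling to the native client spreads.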

Related

Shared library vs REST service. What are the pros and cons?

There is:
a requirement for a key-value pair storage shared between multiple services
a simple table in DynamoDB
very simple logic for creating key-value pairs
Intuitively I want to put the DynamoDB table behind a REST service that will implement all the simple logic I have. Unfortunately, this means adding a lot of reliability and performance challenges to the solution, since making my service as good, resilient, and performant as DynamoDB isn't easy.
For a while now I have been thinking about creating a shared library for this purpose. The library would implement the logic and connect directly to the DynamoDB table. I don't anticipate many changes either in the DynamoDB table or in the logic the library will implement.
What are the possible pros and cons of both approaches?
A service is simply a packaging and deployment choice for a library. Both are absolutely valid depending on your particular needs.
I'm curious why you feel the need to wrap DynamoDB at all. Is there some particular domain logic you would like to place on top of it to constrain it? DynamoDB is already a RESTful service, so putting your own RESTful service on top of it may be advantageous, but you would have to convince me of the value of doing so. If you have particular business logic that constrains the functionality, packaging it as a shared library has certain advantages, especially if you can encapsulate that business logic and separate it from the DynamoDB implementation, as in the sketch below.
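To make the shared-library option concrete, here is a minimal sketch of what such a contract could look like (using the AWS SDK v2 for Java; the table layout and the "pk"/"value" attribute names are invented for illustration):

```java
import java.util.Map;

import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.GetItemRequest;
import software.amazon.awssdk.services.dynamodb.model.PutItemRequest;

// The library-facing contract: callers depend on this, not on DynamoDB.
interface KeyValueStore {
    String get(String key);
    void put(String key, String value);
}

// DynamoDB-backed implementation, kept behind the interface.
class DynamoDbKeyValueStore implements KeyValueStore {

    private final DynamoDbClient client;
    private final String tableName;

    DynamoDbKeyValueStore(DynamoDbClient client, String tableName) {
        this.client = client;
        this.tableName = tableName;
    }

    @Override
    public String get(String key) {
        Map<String, AttributeValue> item = client.getItem(GetItemRequest.builder()
                .tableName(tableName)
                .key(Map.of("pk", AttributeValue.builder().s(key).build()))
                .build()).item();
        return item.isEmpty() ? null : item.get("value").s();
    }

    @Override
    public void put(String key, String value) {
        client.putItem(PutItemRequest.builder()
                .tableName(tableName)
                .item(Map.of(
                        "pk", AttributeValue.builder().s(key).build(),
                        "value", AttributeValue.builder().s(value).build()))
                .build());
    }
}
```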
I am assuming that updating the shared library will not be under your control, and that clients (library users) will update whenever it suits them.
If that assumption is true, you should always go with a REST service, considering a few things (see the sketch after this list):
Your REST API may use a cache instead of calling DynamoDB all the time.
You might want to update the schema of the data you put in DynamoDB.
You may switch to another database altogether.
You may have some validation logic, which will certainly evolve over time.
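A rough sketch of the REST-service alternative in Spring, reusing the hypothetical KeyValueStore contract from the library sketch above (the endpoint paths and cache name are likewise invented):

```java
import org.springframework.cache.annotation.Cacheable;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PutMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// KeyValueStore is the contract from the library sketch above.
@RestController
@RequestMapping("/kv")
public class KeyValueController {

    private final KeyValueStore store;

    public KeyValueController(KeyValueStore store) {
        this.store = store;
    }

    // Reads can be served from a service-side cache (point 1),
    // sparing DynamoDB round trips.
    @Cacheable("kv")
    @GetMapping("/{key}")
    public String get(@PathVariable String key) {
        return store.get(key);
    }

    // Validation logic (point 4) evolves here without any client redeploys.
    @PutMapping("/{key}")
    public void put(@PathVariable String key, @RequestBody String value) {
        if (value == null || value.isBlank()) {
            throw new IllegalArgumentException("value must not be blank");
        }
        store.put(key, value);
    }
}
```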

Does JpaTokenStore have any downsides when compared to JdbcTokenStore for Spring Security OAuth?

I currently use JPA via Hibernate in my application. Since Spring Security OAuth2 provides JdbcTokenStore, I started using it. The problem with that is that I cannot use the cache (which all my entities in the application currently share).
It hits the database in a separate flow.
I am thinking of implementing a JpaTokenStore that's backed by JPA, to leverage the caching advantages that come with it.
Has anyone tried implementing this, or does anyone see any downsides to this approach?
In one project I've implemented org.springframework.security.oauth2.client.token.ClientTokenServices with JPA and didn't notice any problems. I was able to use all the standard features of JPA, including @Transactional for JPAClientTokenServices#saveAccessToken.
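For reference, a JPA-based ClientTokenServices might be sketched roughly like this. This is not the implementation from that project: the StoredToken entity and the key scheme are invented, and a real implementation would also persist the refresh token, token type, scope, and expiry:

```java
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.persistence.PersistenceContext;

import org.springframework.security.core.Authentication;
import org.springframework.security.oauth2.client.resource.OAuth2ProtectedResourceDetails;
import org.springframework.security.oauth2.client.token.ClientTokenServices;
import org.springframework.security.oauth2.common.DefaultOAuth2AccessToken;
import org.springframework.security.oauth2.common.OAuth2AccessToken;
import org.springframework.transaction.annotation.Transactional;

// Hypothetical entity holding the persisted token value.
@Entity
class StoredToken {
    @Id
    String id;
    String tokenValue;

    protected StoredToken() { } // required by JPA

    StoredToken(String id, String tokenValue) {
        this.id = id;
        this.tokenValue = tokenValue;
    }
}

public class JpaClientTokenServices implements ClientTokenServices {

    @PersistenceContext
    private EntityManager entityManager;

    @Override
    @Transactional(readOnly = true)
    public OAuth2AccessToken getAccessToken(OAuth2ProtectedResourceDetails resource, Authentication authentication) {
        StoredToken stored = entityManager.find(StoredToken.class, keyFor(resource, authentication));
        return stored == null ? null : new DefaultOAuth2AccessToken(stored.tokenValue);
    }

    @Override
    @Transactional
    public void saveAccessToken(OAuth2ProtectedResourceDetails resource, Authentication authentication, OAuth2AccessToken accessToken) {
        entityManager.merge(new StoredToken(keyFor(resource, authentication), accessToken.getValue()));
    }

    @Override
    @Transactional
    public void removeAccessToken(OAuth2ProtectedResourceDetails resource, Authentication authentication) {
        StoredToken stored = entityManager.find(StoredToken.class, keyFor(resource, authentication));
        if (stored != null) {
            entityManager.remove(stored);
        }
    }

    // One token per (resource, user) pair; this key scheme is an assumption.
    private String keyFor(OAuth2ProtectedResourceDetails resource, Authentication authentication) {
        return resource.getId() + ":" + (authentication == null ? "anonymous" : authentication.getName());
    }
}
```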
There is nothing stopping you from doing it, and plenty of people do use JPA for all sorts of things, but IMO JPA is not ideal for handling the storage of identity data. JPA is designed and optimized for caching data for the duration of a JDBC connection (basically a transaction), while identity data typically has a different and much longer lifetime. If you store long-lived data using JPA, you have to deal with the consequences of what happens when you access it outside its normal lifetime, e.g. by using DTOs, which to some extent negates the benefits of using JPA in the first place.
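To illustrate the lifetime mismatch: a lazily loaded association is only usable while its persistence context is open. In this sketch (entity names invented), the final line typically fails with Hibernate's LazyInitializationException:

```java
import java.util.List;

import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.FetchType;
import javax.persistence.Id;
import javax.persistence.OneToMany;
import javax.persistence.Persistence;

@Entity
class Account {
    @Id
    Long id;
    @OneToMany(fetch = FetchType.LAZY)
    List<GrantedToken> tokens;
}

@Entity
class GrantedToken {
    @Id
    String value;
}

public class DetachedAccessExample {
    public static void main(String[] args) {
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("example");
        EntityManager em = emf.createEntityManager();
        Account account = em.find(Account.class, 1L);
        em.close(); // persistence context closed; 'account' is now detached

        // With Hibernate, touching the lazy collection here typically fails
        // with LazyInitializationException: the data has outlived the context
        // it was loaded in. This mismatch is what pushes long-lived data
        // toward DTOs.
        System.out.println(account.tokens.size());
    }
}
```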

Document-oriented data abstraction layer for NoSQL (Mongo)?

I'm building a server application that interfaces with the world via a RESTful web service and uses MongoDB for storage. As it happens, the JSON resources of the web service are fairly close to the structure of the BSON documents stored in Mongo.
While I typically use an object-oriented DAO abstraction to hide the details of the persistence implementation, it doesn't quite seem to be the best fit in this case, since what I really want to do is fetch a document from the DB based on a query and perform a transformation. Building an object graph as an intermediary seems excessive (a sketch of what I mean follows).
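Roughly, this is the shape of the operation I want the abstraction to support, shown here against the plain MongoDB Java driver (collection, field, and connection details are made up):

```java
import static com.mongodb.client.model.Filters.eq;

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class EventLookup {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> events =
                    client.getDatabase("app").getCollection("events");

            // Fetch the document by query and transform it straight into the
            // wire format, without building an object graph in between.
            Document event = events.find(eq("slug", "spring-meetup")).first();
            if (event != null) {
                event.remove("_id"); // shape the resource for the REST response
                System.out.println(event.toJson());
            }
        }
    }
}
```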
Does anyone have any recommendations for an abstraction pattern that fits this bill?
Edit: Removed a 1 AM digression about just not using any abstraction and using the Mongo driver directly.
The level of abstraction is up to you, your needs, and your requirements. There are various language-specific layers on top of the native MongoDB drivers. It is up to you to decide what you need, not us. We cannot give a recommendation without a more precise and detailed background. If you ask a generic question, you will receive a generic answer.

How can I set up OData and EF without coupling to my database structure?

I really like OData (WCF Data Services). In past projects I have coded up so many web services just to allow different ways to read my data.
OData gives great flexibility for the clients to have the data as they need it.
However, in a discussion today, a co-worker pointed out that the way we are doing OData is little more than giving the client application a connection to the database.
Here is how we are setting up our WCF Data Service (Note: this is the traditional way)
Create an Entity Framework (EF) Data Model of our database
Publish that model with WCF Data Services
Add Security to the OData feed
(This is where it is better than a direct connection to the SQL Server)
My co-worker (correctly) pointed out that all our clients will now be coupled to the database. (If a table or column is refactored, then the clients will have to change too.)
EF offers a bit of flexibility in how your data is presented and could be used to hide some minor database changes that don't affect the client apps, but I have found it to be quite limited. (See this post for an example.) I have found that the POCO templates (while nice for allowing separation of the model and the entities) also do not offer very much flexibility.
So, the question: what do I tell my co-worker? How do I set up my WCF Data Services so they use business-oriented contracts (as they would if every read operation used a standard WCF SOAP-based service)?
Just to be clear, let me ask this a different way: how can I decouple EF from WCF Data Services? I am fine with making up my own contracts and using AutoMapper to convert between them, but I would like not to go directly from EF to OData.
NOTE: I still want to use EF as my ORM. Rolling my own ORM is not really a solution...
If you use your own custom classes instead of the classes generated directly by EF, you will also change the provider for WCF Data Services. It means you will no longer pass the EF context as the generic parameter to the DataService base class. This will be OK if you have read-only services, but once you expect any data modifications from clients, you will have a lot of work to do.
Data services based on an EF context support data modifications. All other data services use the reflection provider, which is read-only by default until you implement IUpdatable on your custom "service context class".
Data services are a technology for quickly creating services that expose your data. They are coupled with their context, and it is the responsibility of the context to provide the abstraction. If you want to make quick and easy services, you are dependent on the features supported by EF mapping. You can make some abstractions in the EDMX and you can make projections (DefiningQuery, QueryView), etc., but all these features have limitations (for example, projections are read-only unless you use stored procedures for modifications).
Data services are not the same as providing a connection to the database. There is one very big difference: a connection to the database ensures only access and execution permissions, but it does not ensure data security. WCF Data Services offer data security because you can create interceptors that add filters to queries so a user retrieves only the data he is allowed to see, or that check whether he is allowed to modify the data. That is the difference you can point out to your colleague.
As for abstraction: do you want a quick and easy solution or not? You can inject an abstraction layer between the service and the ORM, but you need to implement the interfaces mentioned above (such as IUpdatable) and you have to test them.
Simplest approach:
DO NOT PUBLISH YOUR TABLES ;)
Make a separate schema.
Add views to that schema.
Put those views into EF and publish them.
The views are decoupled from the tables and thus can be simplified and refactored separately.
This is the standard approach, also used for reporting.
Apart from enabling more granular data authorisation (based on certain field values, etc.), OData also makes your data accessible via open standards like JSON/XML over HTTP using OAuth. This is very useful for web and mobile applications. You could create a web service to expose your data, but that would warrant a change every time your clients' data requirements change (e.g. extra fields needed), whereas OData handles this via OData queries. In a big enterprise this is also useful for designing security at the infrastructure level, since it only allows text-based (HTTP) calls, which can be inspected and verified for security threats via network firewalls.
You have some other options for your OData client. Have a look at Simple.OData.Client, described in this article: http://www.codeproject.com/Articles/686240/reasons-to-consume-OData-feeds-using-Simple-ODa
And in case you are familiar with Simple.Data microORM, there is an OData adapter for it:
https://github.com/simplefx/Simple.OData/wiki
UPDATE: My recommendations address the client side, while your question is about setting up the server side, so of course they are not what you are asking. I will leave my answer, however, so you are aware of the client alternatives.

Where is the best place to store constraints in an API using MongoDB?

I'm writing an API that uses MongoDB as the storage backend. Let's say the API allows a consumer to query for upcoming events, and that some events are private and should not come up in the results for the current user.
Should I:
Implement this at the API level. The API code will be responsible for these checks. The advantage seems to be that if I change storage engines (unlikely), the business code will stay intact.
Implement this as a stored JavaScript function.
At the API level, for the reason you mentioned: you're independent of the underlying storage mechanism of your application.
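For illustration, a sketch of how the API layer could fold the visibility rule into the query itself, keeping the business rule out of the database (MongoDB Java driver; collection and field names are made up):

```java
import static com.mongodb.client.model.Filters.and;
import static com.mongodb.client.model.Filters.eq;
import static com.mongodb.client.model.Filters.gte;
import static com.mongodb.client.model.Filters.or;

import java.util.ArrayList;
import java.util.Date;
import java.util.List;

import com.mongodb.client.MongoCollection;
import org.bson.Document;
import org.bson.conversions.Bson;

public class EventQueries {

    private final MongoCollection<Document> events;

    public EventQueries(MongoCollection<Document> events) {
        this.events = events;
    }

    // The business rule lives here, in the API layer, not in the database:
    // upcoming events, excluding private events that don't belong to the user.
    public List<Document> upcomingEventsFor(String userId) {
        Bson visible = or(eq("private", false), eq("ownerId", userId));
        Bson upcoming = gte("startsAt", new Date());
        return events.find(and(upcoming, visible)).into(new ArrayList<>());
    }
}
```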
A good guideline is Persistence Ignorance: make your business logic as little aware of the storage mechanism as possible. This implies that business logic shouldn't be located in your storage layer either. So stored procedures or stored JavaScript functions shouldn't contain business logic. Some advantages of this:
You can swap the underlying database with minimal effort and without having to re-implement your business logic in the new database.
All of your business logic is contained inside your application layer, making the code base easier to understand and debug.
The only functions you should store in MongoDB are 'utility' functions: functions that simplify common operations, such as string manipulation, but that are not tied to your business logic in any way.