ContentResolver usage

I am new to the Android domain and am in the learning phase. I have a couple of questions:
Do we have single ContentResolver object per app?
Is it a singleton object?
Who manages this object lifecycle?
If it's a singleton, how does it handle multiple requests querying a ContentProvider?

From Alex Lockwood's Blog -
http://www.androiddesignpatterns.com/2012/06/content-resolvers-and-content-providers.html
What is the Content Resolver?
The Content Resolver is the single, global instance in your
application that provides access to your (and other applications')
content providers. The Content Resolver behaves exactly as its name
implies: it accepts requests from clients, and resolves these requests
by directing them to the content provider with a distinct authority.
To do this, the Content Resolver stores a mapping from authorities to
Content Providers. This design is important, as it allows a simple and
secure means of accessing other applications' Content Providers.
The Content Resolver includes the CRUD (create, read, update, delete)
methods corresponding to the abstract methods (insert, delete, query,
update) in the Content Provider class. The Content Resolver does not
know the implementation of the Content Providers it is interacting
with (nor does it need to know); each method is passed a URI that
specifies the Content Provider to interact with.
What is a Content Provider?
Whereas the Content Resolver provides an abstraction from the
application's Content Providers, Content Providers provide an
abstraction from the underlying data source (i.e. a SQLite database).
They provide mechanisms for defining data security (i.e. by enforcing
read/write permissions) and offer a standard interface that connects
data in one process with code running in another process.
Content Providers provide an interface for publishing and consuming
data, based around a simple URI addressing model using the content://
scheme. They enable you to decouple your application layers from the
underlying data layers, making your application data-source agnostic
by abstracting the underlying data source.
The Life of a Query
So what exactly is the step-by-step process behind a simple query? As
described above, when you query data from your database via the
content provider, you don't communicate with the provider directly.
Instead, you use the Content Resolver object to communicate with the
provider. The specific sequence of events that occurs when a query is
made is given below:
1. A call to getContentResolver().query(Uri, String[], String, String[], String) is made. The call invokes the Content Resolver's query method, not the ContentProvider's.
2. When the query method is invoked, the Content Resolver parses the uri argument and extracts its authority.
3. The Content Resolver directs the request to the content provider registered with the (unique) authority. This is done by calling the Content Provider's query method.
4. When the Content Provider's query method is invoked, the query is performed and a Cursor is returned (or an exception is thrown). The resulting behavior depends entirely on the Content Provider's implementation.
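To make that sequence concrete, here is a minimal sketch in Java of a query made through the Content Resolver. It targets the system Contacts provider purely as an example (any authority you are allowed to read works the same way) and assumes the READ_CONTACTS permission has been granted; the class and method names are my own.

    import android.content.Context;
    import android.database.Cursor;
    import android.net.Uri;
    import android.provider.ContactsContract;

    public class ContactNamesExample {
        // Steps 1-4 above: getContentResolver() returns the resolver, the resolver
        // extracts the authority from the URI, routes the call to the registered
        // ContentProvider's query(), and hands the resulting Cursor back to us.
        public static void printDisplayNames(Context context) {
            Uri uri = ContactsContract.Contacts.CONTENT_URI; // authority "com.android.contacts"
            String[] projection = { ContactsContract.Contacts.DISPLAY_NAME_PRIMARY };

            Cursor cursor = context.getContentResolver()
                    .query(uri, projection, null, null, null);
            if (cursor == null) {
                return; // no provider matched the authority, or the query was rejected
            }
            try {
                while (cursor.moveToNext()) {
                    System.out.println(cursor.getString(0));
                }
            } finally {
                cursor.close();
            }
        }
    }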

Related

REST on non-CRUD operations

I have a resource called “subscriptions”
I need to update a subscription’s send date. When a request is sent to my endpoint, my server will call a third-party system to update the passed subscription.
“subscriptions” have other types of updates. For instance, you can change a subscription’s frequency. This operation also involves calling a third-party system from my server.
To be truly “RESTful,” must I force these different types of updates to share an endpoint?
PATCH subscriptions/:id
I can hypothetically use my controller behind the endpoint to fire different functions depending on the query string... But what if I need to add a third or fourth “update” type action? Should they ALL run through this single PATCH route?
To be truly “RESTful,” must I force these different types of updates to share an endpoint?
No - but you will often want to.
Consider how you would support this on the web: you might have a number of different HTML forms, each accepting a slightly different set of inputs from the user. When the form is submitted, the browser will use the input controls and form metadata to construct an HTTP (POST) request. The target URI of the request is copied from the form action.
So your question is analogous to: should we use the same action for all of our different forms?
And the answer is yes, if you want the general purpose HTTP application to understand which resource is expected to change in response to the message. One reason that you might want that is cache invalidation; using the right target URI allows all of the caches to understand which previously cached responses should not be reused.
Is that choice free? No - it adds some ambiguity to your access logs, and routing the request to the appropriate handler in your code takes a bit more work.
Trying to use PATCH with a different target URI is a little bit weird, and suggests that maybe you are trying to stretch PATCH beyond the standard constraints.
PATCH (and PUT) have remote authoring semantics; what they mean is "make your copy of the target resource look like my copy". These are methods we would use if we were trying to fix a spelling error on a web page.
Trying to change the representation of one resource by sending a remote authoring request to a different resource makes it harder for the general purpose HTTP application components to add value. You are coloring outside of the lines, and that means accepting the liability if anything goes wrong because you are using standardized messages in a non-standard way.
That said, it is reasonable to have many different resources that present representations of the same domain entity. Instead of putting everything you know about a user into one web page, you can spread it out among several that are linked together.
You might have, for example, a web page for an invoice, and then another web page for shipping information, and another web page for billing information. You now have a resource model with clearer separation of concerns, and can combine the standardized meanings of PUT/PATCH with this resource model to further your business goals.
We can create as many resources as we need (in the web level; at the REST level) to get a job done. -- Webber, 2011
So, in your example, would I do one endpoint like user/:id/invoice/:id and then another like user/:id/billing/:id?
Resources, not endpoints.
GET /invoice/12345
GET /invoice/12345/shipping-address
GET /invoice/12345/billing-address
Or
GET /invoice/12345
GET /shipping-address/12345
GET /billing-address/12345
The spelling conventions that you use for resource identifiers don't actually matter very much.
So if it makes life easier for you to stick all of these into a hierarchy that includes both users and invoices, that's also fine.
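As a rough illustration of "resources, not endpoints", the sub-resources above could each get their own PATCH handler. This is only a sketch using Spring MVC-style annotations; the controller name, paths, and the use of a plain Map as the patch body are my own choices, not anything REST itself prescribes.

    import java.util.Map;
    import org.springframework.web.bind.annotation.PatchMapping;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.RequestBody;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    public class InvoiceController {

        // PATCH /invoice/{id}/shipping-address
        // Remote authoring semantics: "make your copy of the shipping address look like mine".
        @PatchMapping("/invoice/{id}/shipping-address")
        public void patchShippingAddress(@PathVariable("id") String id,
                                         @RequestBody Map<String, Object> patch) {
            // apply the partial update to the shipping-address resource only
        }

        // PATCH /invoice/{id}/billing-address
        @PatchMapping("/invoice/{id}/billing-address")
        public void patchBillingAddress(@PathVariable("id") String id,
                                        @RequestBody Map<String, Object> patch) {
            // apply the partial update to the billing-address resource only
        }
    }

Because each kind of update has its own target URI, caches and other general-purpose components can tell exactly which resource changed.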

Clean architecture and user logins in Flutter - how do I store user information?

I've been trying to use Reso Coder's Flutter adaptation of Uncle Bob's Clean Architecture.
My app connects to an API, and most requests (other than logging in) require an authentication token.
Furthermore, upon logging in, user profile information (like a name and profile picture) is received.
I need a way to save this data upon login, and use it in both future API requests and my app's UI.
As I'm new to Uncle Bob's Clean Architecture, I'm not quite sure where this data belongs. Here are the ideas I've come up with, all involving storing the data in a User object:
Store the User in the repository layer, in an authentication feature directory. Other repository-level methods can pass it to the appropriate datasource methods.
This seems to make the most sense; other repository-level methods that make other API calls can use the stored User easily, passing it to methods in the data source layer.
If this is the way to go, I'm not quite sure how other features (that use the API) would access the User - is it okay to have a repository depend on another, and pass the authentication repository to the new feature repository?
Store the User in the repository layer, in an authentication feature directory. Other (non-login) usecases can depend on both this repository and on one relevant to their own feature, passing the User to their repository methods.
This also breaks the vertical feature barrier, but it may be cleaner than idea 1.
For both these ideas, here's what my repository looks like:
abstract class AuthenticationRepository {
  /// The current user.
  User get currentUser;

  /// True if logged in.
  bool get loggedIn;

  /// Logs in, saving the [User].
  Future<void> login(AuthenticationParams params);

  /// Logs out, disposing of the [User].
  Future<void> logout();

  /// Same as [logout], but logs out of all devices.
  Future<void> logoutAll();

  /// Retrieves stored login credentials.
  Future<AuthenticationParams> retrieveStoredCredentials();
}
Are these ideas "valid", and are there any better ways of doing it?
I see another option to tackle the problem. The solution I want to talk about comes from domain-driven design and is an event-based approach.
In DDD you have the concept of a bounded context. A business object (Uncle Bob's entity) can have different meanings in different bounded contexts. Take a look at your user business object. The data and methods that one use case uses are often different from the data and methods that other use cases use. That's why you have different user objects in different bounded contexts. They are a kind of perspective that each use case has on the same business object.
If a business object is modified in one bounded context it can emit a business event. Another feature can listen to those events. The event mechanism can either be a simple observer pattern or, if you need to distribute your application features via microservices, a message queue. In case you use a simple observer pattern, the event emitter and event handler can run within the same data source transaction. But they can also run in different ones. It depends on your needs.
So when the sign-up use case registers a new user it emits a UserSignedUpEvent. Other features can now listen to this event. The event carries the information of the user, like the email, the name, the profile image and other information that the user provided during sign-up. Other features can now save the piece of data they need to their own data source. It can be the same one the sign-up use case uses (just other tables or another schema). But it is also possible that it is a completely different data source, maybe another kind of data source like a NoSQL db. The part I wrote above about transactions is of course more difficult if you have different data sources.
The main point is that each feature has its own data and manages it. It might be a copy of the whole user information, but in a lot of cases it is only a subset.
The event-based approach can give you perfect modularization. But as always when something looks great, it comes at a cost: you have to duplicate some part or even all of the data. When you think of a microservice architecture where some features live in different microservices, the duplication increases the availability of the service. The service can operate even if the main service that manages the data is down, because a local copy exists. But now you have to deal with consistency issues - eventual consistency.
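A minimal in-process sketch of the observer variant, written in Java for illustration (the class and field names are invented; in a Flutter app the same shape translates directly to Dart):

    import java.util.ArrayList;
    import java.util.List;

    // The business event emitted by the sign-up use case.
    class UserSignedUpEvent {
        final String userId;
        final String email;
        final String displayName;

        UserSignedUpEvent(String userId, String email, String displayName) {
            this.userId = userId;
            this.email = email;
            this.displayName = displayName;
        }
    }

    interface UserSignedUpListener {
        void onUserSignedUp(UserSignedUpEvent event);
    }

    class SignUpUseCase {
        private final List<UserSignedUpListener> listeners = new ArrayList<>();

        void addListener(UserSignedUpListener listener) {
            listeners.add(listener);
        }

        void signUp(String email, String displayName) {
            // 1. persist the new user in this bounded context's own data source
            String userId = "generated-id"; // placeholder
            // 2. publish the event so other features can copy the subset of data they need
            UserSignedUpEvent event = new UserSignedUpEvent(userId, email, displayName);
            for (UserSignedUpListener listener : listeners) {
                listener.onUserSignedUp(event);
            }
        }
    }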
At this point I'd like to stop and guide you to other sources for details:
Chapter 8: Domain Events, Implementing Domain-Driven Design, Vaughn Vernon
The many meanings of event-driven architectures, Martin Fowler

RESTful GET that can change system state?

I am building a service that caches short lived data objects. The object creation process is expensive, so this service will cache them and other downstream applications can use them without managing their lifecycle.
The plan is that downstream apps will make a GET call to this service to fetch an object. If the object is expired, the service will fetch a new object, cache it, and return it to the caller.
And here is my dilemma: this way the GET operation changes system state by fetching a new object. I am sure that I am violating REST principles here - or is there a valid justification for this? Should I just change the method to POST?
This way the GET operation changes system state by fetching a new object. I am sure that I am violating REST principles here - or is there a valid justification for this? Should I just change the method to POST?
The short version: this is fine.
Longer version: REST says that our resources have common "uniform" semantics - the meaning of messages doesn't depend on which resource you reference.
In the case of HTTP, the primary discriminator for requests is the method. For the GET method, the semantics are (currently) described by RFC 7231. GET is explicitly identified as being safe
Request methods are considered "safe" if their defined semantics are essentially read-only; i.e., the client does not request, and does not expect, any state change on the origin server as a result of applying a safe method to a target resource.
If you, the server, need to change a bunch of your private information stores to compute the current representation of the resource, that's an implementation detail hidden behind the HTTP facade. You can do what you like.
Fundamentally, what safe means is that anybody who knows the identifier can ask for the current representation of the resource at any time. This allows browsers to retry requests when the network is flaky, or spiders to crawl around indexing the net, knowing that their requests do no harm (or more precisely, that the fault of any harm inflicted by those requests is properly assigned to the server).
If that's OK, then GET is a perfectly "RESTful" method to use for these requests.
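To illustrate "implementation detail hidden behind the HTTP facade", here is a rough sketch of the server-side part of such a GET handler. The class, field, and method names are made up; the point is only that refreshing an expired entry is private server state, not a state change the client requested.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Supplier;

    class CachingObjectService {
        private static final class Entry {
            final String value;
            final long expiresAtMillis;
            Entry(String value, long expiresAtMillis) {
                this.value = value;
                this.expiresAtMillis = expiresAtMillis;
            }
        }

        private final Map<String, Entry> cache = new ConcurrentHashMap<>();
        private final Supplier<String> expensiveLoader; // the costly object-creation process

        CachingObjectService(Supplier<String> expensiveLoader) {
            this.expensiveLoader = expensiveLoader;
        }

        // Called from the GET handler. From the client's point of view this is safe
        // and read-only; rebuilding an expired entry happens behind the facade.
        String get(String key, long ttlMillis) {
            Entry entry = cache.get(key);
            long now = System.currentTimeMillis();
            if (entry == null || entry.expiresAtMillis <= now) {
                entry = new Entry(expensiveLoader.get(), now + ttlMillis);
                cache.put(key, entry);
            }
            return entry.value;
        }
    }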

RESTful API - how to include id/token/... in each request?

I developed a mobile app that needs to access and update data on a server. I'd like to include e.g. the device ID and some token in each request.
I am including these in the body at the moment, so I have only POST requests, even when asking to read data from the server. However, a request to read data should be GET, but how do I include these pieces of information? Should I just add a body to a GET request? Should I rather add some headers? If so, can I just create any custom headers with any name? Thank you for your guidance.
Your FCM token and device id are really authentication credentials for the request. In HTTP, you typically use the Authorization header with a scheme that indicates to the service how the credentials should be interpreted.
In your case, you could use bearer tokens in the HTTP Authorization header.
While bearer tokens are often used with JWTs, they are not required to be in that specific format.
You could just concatenate the FCM token and the device id like the basic authentication scheme does.
BTW, it's not recommended to use a body on a GET request, since some proxies may not retain it.
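For illustration, a small sketch of attaching such a token with Java's built-in HTTP client. The endpoint URL is a placeholder, and encoding "fcmToken:deviceId" as a bearer token only follows the concatenation idea above; it is not a registered scheme.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    public class ApiClientExample {
        public static void main(String[] args) throws Exception {
            String fcmToken = "fcm-token-value"; // placeholder
            String deviceId = "device-id-value"; // placeholder

            // Concatenate the credentials the way Basic auth does, then Base64-encode them.
            String raw = fcmToken + ":" + deviceId;
            String token = Base64.getEncoder()
                    .encodeToString(raw.getBytes(StandardCharsets.UTF_8));

            // Credentials go in the Authorization header, so the read stays a GET with no body.
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://api.example.com/items")) // hypothetical endpoint
                    .header("Authorization", "Bearer " + token)
                    .GET()
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode());
        }
    }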
Well, REST is basically just a generalization of the concepts that have been used for years in the browser-based web. By applying these concepts consistently in your applications you'll gain the freedom to evolve the server side while gaining robustness to changes on the client side. However, in order to benefit from such strong properties, a certain number of constraints need to be followed consistently, like adhering to the rules of the underlying transport protocol or relying on HATEOAS to drive application state. Any out-of-band information needed to interact with the service will lead to coupling and therefore has the potential to either break clients or prevent servers from changing in the future.
A common misconception in REST architecture design is that URIs should be meaningful and express semantics to the client. However, in a REST architecture the URI is just a pointer to a resource which a client should never parse. The decision whether to invoke the URI should solely be based on the accompanying link relation name, which may further be described in either the media type or common standards. E.g. on a pageable collection, link relations like prev, next, first or last give a client the option to page through the collection. The actual structure of the URI is therefore not important to REST at all. Over-engineered URIs might further lead to typed resources. That is why I don't like the term restful-url; what would non-restful-urls look like then?
While sending everything via POST requests is technically a valid option, it also has some drawbacks to consider. IANA maintains a list of available HTTP methods you might use. Each method conveys different promises and semantics. E.g. a client invoking a GET operation on a server should be safe to assume that invoking the resource does not cause any state changes (safe) and that in case of network issues the request can be reissued without any further considerations (idempotent). These are very important benefits to, e.g., Web crawlers. Besides that, intermediary nodes can determine, based on the request method and the resulting response, whether the response can be cached or not. While this is not necessarily an issue in terms of decoupling clients from servers, it helps to take unnecessary workload away from the server itself, especially when resource state is rarely changing, improving the scalability of the whole system.
POST, on the other hand, does not convey such properties. On sending a POST request for retrieving data, the client can't be sure whether the request actually led to changes of the resource's state or not. On a network issue the request might have reached the server and may have created a new resource, though the response just got lost midway, which can keep the client in a state of uncertainty whether it can simply resend the request or not. Also, responses to POST operations are not cacheable by default, only after explicitly adding freshness information to them. A POST method invocation requests the target resource to process the provided representation according to the resource's own semantics. As literally anything can be sent to the server, it is important that the server teaches the client how a request should look. In HTML, e.g., this is done via Web forms, where a user can fill data into certain input fields and then send the data to the server by clicking a submit button. The same concept could be applied to mobile or REST applications as well. Either reusing HTML forms or defining your own application/vnd.company-x.forms+json, where the description of that media type is made public (or registered with IANA), can help you here.
The actual question of where to include certain data is, unfortunately, too generic to give a short answer. It further depends on whether the data should be shareable or has security-related concerns. While parameters might be passed to the server via URL parameters (query, matrix, path) to a certain extent, it is probably not the best option in general, even though query parameters are encrypted in SSL interactions. This option, though, is convenient if the URI should be pastable without losing information; such a URI of course shouldn't contain security-related data. Security-related information should almost always be passed in HTTP headers or at least in the actual payload itself.
Usually you should distinguish between content and metadata describing the content. While the content should be the actual payload of the request/response, any metadata describing the content should go inside the headers. Think of an image you want to transfer: as you don't want to mess with the bytes of the image, you simply append the image name, the compression format and further properties describing how to convert the bytes back to an image representation within the headers. This discrimination probably works best for standardized representation formats, as you need to stay within the capabilities of the spec to guarantee interoperability. Though even there things may start to get fuzzy. E.g. in the area of EDI there exist a couple of well-defined standards like Edifact, Tradacoms, and so forth, which can be used to exchange different message formats like invoices, orders, order responses, ... though different ERP systems speak different slangs, and this is where things start to get complicated and messy.
If you are in control of your representation format, because you did not standardize it or have defined it only vaguely so far, it may be even harder to decide whether to put something inside your document or append it via headers. Here it solely depends on your design. I have also seen representations that defined their own header sections within the payload and therefore recreated a SOAP-like envelope-header-body structure.
About your question whether you can create a custom header for your requirement: my answer is YES.
As mentioned above, you can use the standard Authorization header to send the token in each request. Another alternative is defining a custom header. However, you will have to implement logic on the server side to support that custom header.
You can read more about it here

Should An Application Service Be Injected Into A Domain Service

I am working on a WinForms application using Entity Framework 6 with the following layers:
Presentation
Application
Domain
Infrastructure
When a user clicks the save button from the UI, it calls an application service in the Application layer and passes in the request. The application service then calls a domain service with the request. The domain service calls on several entities within a domain model to perform validations on data used in the request.
One or more validations in the domain model require information from a repository to determine if data in the request received from the Presentation Layer conforms to certain business rules.
I am considering two options to address this.
1. Have the Application Service retrieve the information needed from the repositories for validation and pass those values into the Domain Service, which will call on the domain model and entities to validate the incoming request for rules and values. Then let the Application Service save the request when the Domain Service has finished its validations, which will result in returning control back to the Application Service which was synchronously waiting for completion of validations. If I do this, then the domain layer will have no direct or indirect (injected) reference to the repositories. Unit testing of the Domain Service will be easier if I do this because nothing is injected into it to perform validations. Everything it needs is already passed in. The drawback is that some business knowledge is put into the Application Service because now it needs to know which repository information to retrieve for validations of a request.
2. When calling the domain service for validation of the request, inject an instance of the Application Service into it. The Domain Service can then get information from the repository using the injected Application Service, whose service contract is defined in the Domain Layer. Once all the information is available it is passed as needed to various entities to validate rules and values. Once validation is completed, the Domain Service saves the request using the injected Application Service. When the Domain Service is done and exits, it returns the status of the save operation to the Application Service, which has been waiting for validation to complete. The outer waiting application service can then return the results of the save to the UI. One concern I have here is that when unit testing the Domain Service I will have to mock the injected Application Service.
Which option or other course of action would work out better? Thanks in advance.
"Should An Application Service Be Injected Into A Domain Service"
No, never!
Resolving the data from the application service and passing it to the domain service is usually fine, but if you think that domain logic is leaking, then you can apply the Interface Segregation Principle (ISP) and define an interface in the domain based on what contract is required to query for the "wanted data". Implement that interface on your repository or any other object that can fulfill the task and inject it into your domain service.
E.g. (pseudo-code)
//domain
public interface WantedDataProvider {
    WantedData findWantedData(...); // interface method: no body
}

public class SomeDomainService {
    private final WantedDataProvider wantedDataProvider; // injected, e.g. via constructor

    public SomeDomainService(WantedDataProvider wantedDataProvider) {
        this.wantedDataProvider = wantedDataProvider;
    }
}

//infrastructure
public class SomeRepository implements WantedDataProvider {
    public WantedData findWantedData(...) {
        //implementation
    }
}
EDIT:
I have a request aggregate root with an employee name. One rule is the employee must be full time and not a contractor
If the information to perform the validation already exists on an aggregate, you may also use this AR as a factory for other ARs. Assuming an employee holds its contract type...
Employee employee = employeeRepository.findById(employeeId);
Request request = employee.submitRequest(requestDetails); //throws if not full time
requestRepository.add(request);
Note that the invariant can only be made eventually consistent here, unless you change your aggregate boundaries, but it's the same with the other solutions.
When a significant process or transformation in the domain is not a natural responsibility of an Entity or Value Object, add an operation to the model as a standalone interface declared as a Service. Define the interface in terms of the language of the model and make sure the operation name is part of the Ubiquitous Language. Make the Service stateless - Evans, the Blue Book
Don't lean too heavily toward modelling a domain concept as a Service. Do so only if the circumstances fit. If we aren't careful, we might start to treat Services as our modelling "silver bullet". Using Services overzealously will usually result in the negative consequences of creating an Anemic Domain Model, where all the domain logic resides in Services. - Vernon, the Red Book
What I am trying to say by citing extensively is that you seem to see Domain Services as something you MUST have, where you absolutely don't have to. Your application services can happily use repositories to fetch the aggregate and then call the aggregate root method to perform necessary operations. Your aggregate root is responsible for protecting its own invariants by executing the necessary validations inside it and throwing a validation exception if anything goes wrong.
In case you absolutely must use a domain service, if you are doing it right, meaning you implement an onion architecture where inner layers do not know about outer layers, your domain service won't be able to know about the application service. However, you can, if you absolutely have to, send a delegate into the domain service to do something that the application service desires. However, this scenario is too complex to be true. Seeing that you only require validation, please read the cited parts of the blue and red books and make the right decision. Again, there is no mandatory domain service in DDD; this is one of the common misconceptions.
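For completeness, a tiny sketch of the "send a delegate into the domain service" idea mentioned above. All names are invented for illustration; note that the domain service still has no compile-time dependency on the application layer.

    import java.util.function.Consumer;

    // Domain layer: accepts a callback instead of knowing about the application service.
    class RequestValidationService {
        void validateAndThen(String requestData, Consumer<String> onValidated) {
            // ...run the domain validations on requestData, throw if a rule is violated...
            onValidated.accept(requestData); // hand control back without referencing outer layers
        }
    }

    // Application layer: supplies the delegate (e.g. "save via repository") when calling down.
    class SubmitRequestApplicationService {
        private final RequestValidationService validationService = new RequestValidationService();

        void submit(String requestData) {
            validationService.validateAndThen(requestData, validated -> {
                // persist the validated request here
                System.out.println("saving: " + validated);
            });
        }
    }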