How to share the Wayland display connection between unrelated code parts

I am currently writing a library that can be used by Wayland client software. The library is intended to be mostly independent of the client toolkit (currently only Qt, but other Wayland-enabled toolkits should be able to use it as well). It requires only a wl_display pointer, retrieved from the GUI toolkit, passed to the initialization routine. After initialization is done, the library should be able to work without further contact with the toolkit, which keeps it cross-toolkit compatible.
The problem arises because my library requires a couple of global object proxies (e.g. wl_output). The library uses a custom wl_registry to bind its own proxies to the required global objects. However, from the server's perspective, all proxies to the same object are equivalent. When the server sends events that contain object references, they may carry either the toolkit's or the library's proxy. This leads to problems because there is no easy way to tell them apart: when the toolkit receives such an event, it blindly assumes that the user data of the proxy belongs to the toolkit and uses it, even if the proxy actually belongs to my library.
Is there any way to reconcile such unrelated code, or is this use beyond the scope of the Wayland library/protocol, meaning I should redesign my solution?

Qt Wayland developer here.
When the toolkit receives such events, it blindly assumes that the user data of the proxy belongs to the toolkit and uses it, even if the proxy it received belongs to my library.
Are you sure about this part? When you bind to the global, you create a new proxy object, so I don't see how the toolkit could know about it... Or are you talking about the arguments in events sent by globals, e.g. wl_pointer.set_cursor and the like? It would be nice if you could be more specific about what goes wrong...

Related

What is best practice to communicate between React components and services?

Instead of using a Flux/Redux architecture, how should React components communicate with services?
For example:
There is a container holding a few presentational (React) components:
ChatBox - allows reading/writing messages
AvatarBox with password changer - allows changing the user's password
News stream - lists news items and applies filters to them
Thinking of them as resource representations, I want each of them to access the microservice API by itself (getting or updating data). Is this correct?
It would keep the responsibility for managing each model clean, but it raises performance doubts, since each component would load its content with its own HTTP requests.
This question also refers to: How to execute efficient communication for multiple (micro)services?
When you opt for not using Flux/Redux, here is what you do:
Create an outer component that wraps all the other components. This component is also known as a higher order component or a controller view. It should use an HTTP library to communicate with your microservices (I personally like Axios). I would recommend that you create a client API object that wraps Axios. Your higher order component can reference this client API so it is agnostic of the HTTP library and whatnot. I would also put a reference to this client API on the window object in dev mode so you can do window.clientApi.fetchSomething() in the Chrome console and make debugging easier.
Make all the other components (ChatBox, AvatarBox and NewsStream) controlled. If you are not familiar with this concept, it means they receive everything they need through props and avoid keeping state. These components should not call the microservices themselves; that is the responsibility of the higher order component. In order to be interactive, these components should receive event handlers as props.
Is this correct? It would keep the responsibility for managing each model clean, but it raises performance doubts, since each component would load its content with its own HTTP requests.
You can avoid performance issues by not allowing each component to contact the microservices directly. If your higher order component compiles all the information needed and makes as few HTTP calls as possible, you should be perfectly fine with this approach.
It's generally recommended to use Flux/Redux, but if you opt out, this is how to go about it.
According to: https://facebook.github.io/flux/docs/overview.html#content
Occasionally we may need to add additional controller-views deeper in the hierarchy to keep components simple. This might help us to better encapsulate a section of the hierarchy related to a specific data domain.
And this is what I am thinking about regarding the responsibility of each particular component's domain (three of them were described). So would it be reasonable to make three controller views (or stores), each accessing the API it depends on to manage its resource's data?

Manipulating other modules' functionality in Zend Framework 2

How can I manipulate other modules without editing them? The very same thing that WordPress plugins do: they add functionality to the core system without changing the core code, and they work together like a charm.
I have always wanted to know how to implement this in my own modular application.
A long time ago I wrote the blog post "Use 3rd party modules in Zend Framework 2", specifically about extending Zend Framework 2 modules. The answer from Bez is technically correct, but it could be a bit more specific about the framework.
Read the full post at https://juriansluiman.nl/article/117/use-3rd-party-modules-in-zend-framework-2; in short, it gives you a clue about:
Changing a route from a module (say, you want to have the url /account/login instead of /user/login)
Overriding a view script, so you can completely modify the page's rendering
Changing a form object, so you could add new form fields or mark some required field as not required anymore.
This is a long topic, but here is a short gist.
Extensibility in Zend Framework 2 heavily relies on the premise that components can be interchanged, added, and/or substituted.
Read up on SOLID principles: http://en.wikipedia.org/wiki/SOLID_(object-oriented_design)
Modules typically consist of objects working together like a well-oiled machine, designed to accomplish one thing or a bunch of related things, whatever that may be. These objects are called services, and they are managed by the service locator/service manager.
A big part of making your module truly extensible is to expect developers to extend a class or implement a certain interface, which they register as services. You should provide a mode of definition wherein developers can specify which things they want to substitute and/or which services of their own they want to add -- and this is where the application configuration comes in.
Given the application configuration, you should construct your machinery, a.k.a. module services, according to the options the developer has specified, i.e., use the developer-defined Foo\Bar\UserService service as the YourModule\UserServiceInterface within your module, etc. (This is usually delegated to service factories, which have the opportunity to read the application configuration and construct the appropriate object given a particular set of configuration values.)
EDIT:
To add, a lot can be accomplished by leveraging Zend's Zend\EventManager component. It allows you to give developers the freedom to hook into and listen to certain operations of your module and act accordingly (see: http://en.wikipedia.org/wiki/Observer_pattern).

Javascript client for REST API

I am trying to write a JavaScript client for a web application which provides a REST API to interact with the application. I want to do this in an advanced way, with a proven stack of tools and methodologies available in JavaScript.
Most of the guides about JavaScript client library development I found on the web are application oriented and have a view part (I mean an HTML part). What I need is a client library with methods that can be used to develop web applications, so I don't want this library to depend on any other JavaScript library like jQuery, Backbone, etc.
I have gone through a lot of design patterns available in JavaScript, especially the patterns mentioned in Learning JavaScript Design Patterns, a book by Addy Osmani. After all that I was confused; I couldn't decide which one to follow.
What I have in mind is like following:
Initialize the library with some key and secret (this can be compared to instantiating a class in PHP).
There will be a data persistence unit which will keep the authenticated user's identity for a predefined amount of time, like sessions in PHP. User data will be stored in cookies or local storage. There will also be provisions to override the methods of this unit so that users can implement their own storage mechanism. A reference to this unit will also be passed during library initialization.
Keep a global request method which handles all the API requests associated with the library (this can be compared to a method of the main class in PHP).
Define all the API methods encapsulated into different units according to the area of the application they deal with. Each unit will have a constructor method which defines some default properties for the unit (this can be compared to defining models in PHP which fetch or save data from the application via the API). Each of these units can inherit from a super unit which provides some default properties and methods.
After reading some blogs and articles I have decided to use Yeoman for library development. Maybe I can use one of the Yeoman community's JavaScript library generators to start the development.
As I described above, I think what I need is a class which keeps a single instance throughout the application and which can be used to reference all the models and the functions in them. For this I could go with the singleton module pattern, but I am not fully sure how to apply it to my requirements.
Any advice is greatly appreciated!

Implementing Chain of Responsibility with Services

I'm thinking about a platform-neutral (i.e. not .NET MEF) technique of implementing the chain-of-responsibility pattern using web services as the handlers. I want to be able to add more CoR handlers by deploying new services and not compiling new CoR code, just changing configuration info. It seems the challenge will be managing the metadata about available handlers and ensuring the handlers conform to the interface.
My question: any ideas on how I can safely ensure:
1. The web services are implementing the interface
2. The web services are implementing the base class behavior, like calling the successor
Because, in compiled code, I have type safety and therefore know that any handlers derive from the abstract base class that ensures the interface and behavior I want. That seems to be missing in the world of services.
This seems like a valid question, but a rather simple one.
You are still afforded the protection of the typing system, even if you are loading code later, at runtime, that the original code never saw before.
I would think the preferred approach here would be to have something like a properties file with a list of implementers (your chain). Then, in the code, you will need a way to instantiate an instance of each handler at runtime to construct the chain. When you construct each instance, you have to check its type. In Java, for instance, that would take the form of instanceof (an abomination ordinarily, but you get a pass for loading scenarios) or isAssignableFrom. In Objective-C, it's conformsToProtocol.
If it doesn't, it can't be used and you can spit an error out to the console.
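As a minimal sketch of that idea in Java (assuming a hypothetical Handler base class and a properties file listing handler class names; all names here are illustrative, not from the question):

    import java.io.FileInputStream;
    import java.util.Properties;

    // Hypothetical abstract base class that every handler must extend.
    abstract class Handler {
        protected Handler successor;
        void setSuccessor(Handler successor) { this.successor = successor; }
        abstract void handle(String request);
    }

    public class ChainLoader {
        // Builds the chain from a line such as:
        // handlers=com.example.AuthHandler,com.example.LogHandler
        public static Handler buildChain(String propertiesPath) throws Exception {
            Properties props = new Properties();
            try (FileInputStream in = new FileInputStream(propertiesPath)) {
                props.load(in);
            }
            Handler first = null, previous = null;
            for (String name : props.getProperty("handlers", "").split(",")) {
                name = name.trim();
                if (name.isEmpty()) continue;
                Class<?> clazz = Class.forName(name);
                // The runtime type check described above: reject anything
                // that does not derive from the expected base class.
                if (!Handler.class.isAssignableFrom(clazz)) {
                    System.err.println(name + " does not extend Handler; skipping");
                    continue;
                }
                Handler handler = (Handler) clazz.getDeclaredConstructor().newInstance();
                if (previous == null) first = handler; else previous.setSuccessor(handler);
                previous = handler;
            }
            return first;
        }
    }

Note that this only verifies each handler's type, not that the implementation actually calls its successor; that behavioral guarantee still has to come from the base class or from tests.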

When should I use RequestFactory vs GWT-RPC?

I am trying to figure out if I should migrate my GWT-RPC calls to the new GWT 2.1 RequestFactory calls.
Google's documentation vaguely mentions that RequestFactory is a better client-server communication method for "data-oriented services".
What I can distill from the documentation is that there is a new Proxy class that simplifies the communication (you don't pass the actual entity back and forth, just the proxy, so it is lighter weight and easier to manage).
Is that the whole point or am I missing something else in the big picture?
The big difference between GWT RPC and RequestFactory is that the RPC system is "RPC-by-concrete-type" while RequestFactory is "RPC-by-interface".
RPC is more convenient to get started with, because you write fewer lines of code and use the same class on both the client and the server. You might create a Person class with a bunch of getters and setters and maybe some simple business logic for further slicing-and-dicing of the data in the Person object. This works quite well until you wind up wanting to have server-specific, non-GWT-compatible, code inside your class. Because the RPC system is based on having the same concrete type on both the client and the server, you can hit a complexity wall based on the capabilities of your GWT client.
To get around the use of incompatible code, many users wind up creating a peer PersonDTO that shadows the real Person object used on the server. The PersonDTO has just a subset of the getters and setters of the server-side, "domain", Person object. Now you have to write code that marshals data between the Person and PersonDTO objects and every other object type that you want to pass to the client.
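To make that boilerplate concrete, here is a sketch of the pattern with a hypothetical Person class (names are illustrative, not from the answer):

    // Server-side "domain" class; in real code it might carry JPA annotations
    // or other server-only logic the GWT compiler cannot translate.
    class Person {
        private Long id;
        private String name;
        public Long getId() { return id; }
        public void setId(Long id) { this.id = id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    // GWT-compatible shadow of Person, shared with the client.
    class PersonDTO implements java.io.Serializable {
        private Long id;
        private String name;
        public Long getId() { return id; }
        public void setId(Long id) { this.id = id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    // Hand-written marshalling, repeated for every transferable type.
    final class PersonMapper {
        static PersonDTO toDto(Person p) {
            PersonDTO dto = new PersonDTO();
            dto.setId(p.getId());
            dto.setName(p.getName());
            return dto;
        }
        static void applyDto(PersonDTO dto, Person p) {
            p.setName(dto.getName());
        }
    }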
RequestFactory starts off by assuming that your domain objects aren't going to be GWT-compatible. You simply declare the properties that should be read and written by the client code in a Proxy interface, and the RequestFactory server components take care of marshaling the data and invoking your service methods. For applications that have a well-defined concept of "Entities" or "Objects with identity and version", the EntityProxy type is used to expose the persistent identity semantics of your data to the client code. Simple objects are mapped using the ValueProxy type.
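For contrast, a minimal RequestFactory declaration might look like the following sketch. The annotations and interfaces come from GWT's com.google.web.bindery.requestfactory.shared package; Person and PersonService are hypothetical stand-ins:

    import com.google.web.bindery.requestfactory.shared.EntityProxy;
    import com.google.web.bindery.requestfactory.shared.ProxyFor;
    import com.google.web.bindery.requestfactory.shared.Request;
    import com.google.web.bindery.requestfactory.shared.RequestContext;
    import com.google.web.bindery.requestfactory.shared.RequestFactory;
    import com.google.web.bindery.requestfactory.shared.Service;

    // Hypothetical server-side types (defined elsewhere in the code base):
    class Person { /* getId(), getVersion(), getName(), setName() ... */ }
    class PersonService { /* void persist(Person p) { ... } */ }

    // Declares only the properties the client may read and write; the
    // server-side Person class itself never has to be GWT-compatible.
    @ProxyFor(Person.class)
    interface PersonProxy extends EntityProxy {
        String getName();
        void setName(String name);
    }

    // Maps client calls onto methods of the server-side PersonService.
    @Service(PersonService.class)
    interface PersonRequest extends RequestContext {
        Request<Void> persist(PersonProxy person);
    }

    interface MyRequestFactory extends RequestFactory {
        PersonRequest personRequest();
    }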
With RequestFactory, you pay an up-front startup cost to accommodate more complicated systems than GWT RPC easily supports. RequestFactory's ServiceLayer provides significantly more hooks to customize its behavior by adding ServiceLayerDecorator instances.
I went through a transition from RPC to RF. First I have to say my experience is limited in that I used exactly zero EntityProxies.
Advantages of GWT RPC:
It's very easy to set up, understand, and LEARN!
Same class-based objects are used on the client and on the server.
This approach saves tons of code.
Ideal when the same model objects (POJOs) are used on both client and server: POJOs == MODEL OBJECTS == DTOs.
Easy to move stuff from the server to client.
Easy to share the implementation of common logic between client and server (this can turn out to be a critical disadvantage when you need different logic).
Disadvantages of GWT RPC:
Impossible to have different implementations of some methods for server and client, e.g. you might need to use a different logging framework on the client and server, or a different equals method.
REALLY BAD implementation that is not further extensible: most of the server functionality is implemented as static methods on an RPC class. THAT REALLY SUCKS.
e.g. it is impossible to add server-side error obfuscation
Some security XSS concerns that are not quite elegantly solvable, see docs (I am not sure whether this is more elegant for RequestFactory)
Disadvantages of RequestFactory:
REALLY HARD to understand from the official docs what the merit of it is! It starts right off with the completely misleading term PROXIES - these are actually the DTOs of RF, created by RF automatically. Proxies are defined by interfaces, e.g. @ProxyFor(Journal.class). The IDE checks whether corresponding methods exist on Journal. So much for the mapping.
RF will not do much for you in terms of commonalities between client and server, because:
On the client you need to convert "PROXIES" to your client domain objects and vice versa (see the sketch after this list). This is completely ridiculous. It could be done declaratively in a few lines of code, but there's NO SUPPORT FOR THAT! If only we could map our domain objects to proxies more elegantly; something like the JavaScript method JSON.stringify(...) is MISSING from the RF toolbox.
Don't forget you are also responsible for copying the transferable properties of your domain objects to the proxies, and so on recursively.
POOR ERROR HANDLING on the server - stack traces are omitted by default on the server and you get empty, useless exceptions on the client. Even when I set a custom error handler, I was not able to get low-level stack traces! Terrible.
Some minor bugs in the IDE support and elsewhere. I filed two bug requests that were accepted. No Einstein was needed to figure out that those were actually bugs.
THE DOCUMENTATION SUCKS. As I mentioned, proxies should be explained better; the term is MISLEADING. For the basic, common problems I was solving, the DOCS ARE USELESS. Another example of misunderstanding from the docs is the connection between JPA annotations and RF. From the succinct docs it looks like they kinda play together, and yes, there is a corresponding question on StackOverflow. I recommend forgetting any JPA 'connection' before trying to understand RF.
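To make the conversion complaint concrete, here is the kind of hand-written mapping the client ends up carrying. This is a sketch with a hypothetical client-side Journal class; JournalProxy stands in for the interface declared via @ProxyFor(Journal.class) against the server entity:

    // Stand-in for the RF proxy interface declared with @ProxyFor(Journal.class).
    interface JournalProxy {
        String getTitle();
        void setTitle(String title);
    }

    // Hypothetical client-side domain object.
    class Journal {
        private String title;
        String getTitle() { return title; }
        void setTitle(String title) { this.title = title; }
    }

    // Written by hand, in both directions, for every proxy type, recursively
    // for nested objects -- the boilerplate the list above complains about.
    class JournalConverter {
        static Journal fromProxy(JournalProxy proxy) {
            Journal journal = new Journal();
            journal.setTitle(proxy.getTitle());
            return journal;
        }
        static void intoProxy(Journal journal, JournalProxy proxy) {
            proxy.setTitle(journal.getTitle());
        }
    }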
Advantages of RequestFactory:
Excellent forum support.
IDE support is pretty good (though not an advantage compared with RPC)
Flexibility of your client and server implementation (loose coupling)
Fancy stuff, connected to EntityProxies, beyond simple DTOs - caching, partial updates, very useful for mobile.
You can use ValueProxies as the simplest replacement for DTOs (but you have to do all the not-so-fancy conversions yourself).
Support for Bean Validation (JSR-303).
Considering other disadvantages of GWT in general:
Impossible to run integration tests (GWT client code + remote server) with the provided JUnit support <= all JSNI has to be mocked (e.g. localStorage); SOP is an issue.
No support for testing setup - headless browser + remote server <= no simple headless testing for GWT, SOP.
Yes, it is possible to run selenium integration tests (but that's not what I want)
JSNI is very powerful, but at those shiny talks they give at conferences they do not mention that writing JSNI code has some rules of its own. Again, figuring out how to write a simple callback was a task worthy of a true researcher.
In summary, the transition from GWT RPC to RequestFactory is far from a WIN-WIN situation when RPC mostly fits your needs. You end up writing tons of conversions from client domain objects to proxies and vice versa. But you get some flexibility and robustness in your solution. And the support on the forum is excellent, on Saturdays as well!
Considering all the advantages and disadvantages I just mentioned, it really pays to think in advance about whether either of these approaches actually brings an improvement to your solution and your development setup without big trade-offs.
I find the idea of creating Proxy classes for all my entities quite annoying. My Hibernate/JPA POJOs are auto-generated from the database model. Why do I now need to create a second mirror of those for RPC? We have a nice "estivation" framework that takes care of "de-hibernating" the POJOs.
Also, the idea of defining service interfaces that don't quite implement the server-side service as a Java contract but do implement the methods sounds very J2EE 1.x/2.x to me.
Unlike RequestFactory, which has poor error-handling and testing capabilities (since it processes most of the stuff under the hood of GWT), RPC allows you to use a more service-oriented approach. RequestFactory implements a more modern dependency-injection-styled approach that can be useful if you need to invoke complex polymorphic data structures. When using RPC your data structures will need to be flatter, as this allows your marshalling utilities to translate between your JSON/XML and Java models. Using RPC also allows you to implement a more robust architecture, as quoted from the GWT dev section on Google's website.
"Simple Client/Server Deployment
The first and most straightforward way to think of service definitions is to treat them as your application's entire back end. From this perspective, client-side code is your "front end" and all service code that runs on the server is "back end." If you take this approach, your service implementations would tend to be more general-purpose APIs that are not tightly coupled to one specific application. Your service definitions would likely directly access databases through JDBC or Hibernate or even files in the server's file system. For many applications, this view is appropriate, and it can be very efficient because it reduces the number of tiers.
Multi-Tier Deployment
In more complex, multi-tiered architectures, your GWT service definitions could simply be lightweight gateways that call through to back-end server environments such as J2EE servers. From this perspective, your services can be viewed as the "server half" of your application's user interface. Instead of being general-purpose, services are created for the specific needs of your user interface. Your services become the "front end" to the "back end" classes that are written by stitching together calls to a more general-purpose back-end layer of services, implemented, for example, as a cluster of J2EE servers. This kind of architecture is appropriate if you require your back-end services to run on a physically separate computer from your HTTP server."
Also note that setting up a single RequestFactory service requires creating around 6 or so Java classes, whereas RPC only requires 3. More code == more errors and complexity in my book.
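For reference, a minimal sketch of the three classes a GWT-RPC service needs (PersonService and its method are hypothetical; the interfaces and annotation are from GWT's com.google.gwt.user.client.rpc and com.google.gwt.user.server.rpc packages):

    import com.google.gwt.user.client.rpc.AsyncCallback;
    import com.google.gwt.user.client.rpc.RemoteService;
    import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;
    import com.google.gwt.user.server.rpc.RemoteServiceServlet;

    // 1. The synchronous service interface, shared by client and server.
    @RemoteServiceRelativePath("person")
    interface PersonService extends RemoteService {
        String getPersonName(Long id);
    }

    // 2. The asynchronous counterpart, used by client code.
    interface PersonServiceAsync {
        void getPersonName(Long id, AsyncCallback<String> callback);
    }

    // 3. The servlet implementation on the server.
    class PersonServiceImpl extends RemoteServiceServlet implements PersonService {
        @Override
        public String getPersonName(Long id) {
            return "Person #" + id; // placeholder lookup
        }
    }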
RequestFactory also has a little more overhead during request processing, as it has to marshal data between the proxies and the actual Java models. This added indirection costs extra processing cycles, which can really add up in an enterprise or production environment.
I also do not believe that RequestFactory services are serializable the way RPC services are.
All in all, after using both for some time now, I always go with RPC as it's more lightweight, easier to test and debug, and faster than using RequestFactory. Although RequestFactory might be more elegant and extensible than its RPC counterpart, the added complexity does not necessarily make it a better tool.
My opinion is that the best architecture is to use two web apps, one client and one server. The server is a simple, lightweight, generic Java webapp that uses the servlet.jar library. The client is GWT. You make RESTful requests via GWT-RPC into the server side of the client web application. The server side of the client is just a pass-through to an Apache HTTP client which uses a persistent tunnel into the request handler you have running as a single servlet in your server servlet web application. The servlet web application should contain your database application layer (Hibernate, Cayenne, SQL, etc.). This allows you to fully divorce the database object models from the actual client, providing a much more extensible and robust way to develop and unit test your application. Granted, it requires a tad bit of initial setup time, but in the end it allows you to create a dynamic request factory sitting outside of GWT. This lets you leverage the best of both worlds, not to mention being able to test and make changes to your server side without having the GWT client compiled or built.
I think it's really helpful if you have heavy POJOs on the client side, for example if you use Hibernate or JPA entities.
We adopted another solution, using a Django style persistence framework with very light entities.
The only caveat I would put in is that RequestFactory uses the binary data transport (deRPC maybe?) and not the normal GWT-RPC.
This only matters if you are doing heavy testing with SyncProxy, JMeter, Fiddler, or any similar tool that can read/evaluate the contents of the HTTP request/response (as they can with GWT-RPC); that would be more challenging with deRPC or RequestFactory.
We have a very large implementation of GWT-RPC in our project.
Actually, we have 50 service interfaces with many methods each, and we have problems with the size of the TypeSerializers generated by the compiler, which make our JS code huge.
So we are analyzing a move towards RequestFactory.
I have been reading for a couple of days, digging into the web and trying to find out what other people are doing.
The most important drawback I saw, and maybe I could be wrong, is that with RequestFactory you are no longer in control of the communication between your server domain objects and your client ones.
What we need is to apply the load/save pattern in a controlled way. I mean, for example: the client receives the whole graph of objects belonging to a specific transaction, makes its updates, and then sends the whole thing back to the server. The server is responsible for doing validation, comparing old with new values, and doing the persistence. If two users from different sites get the same transaction and make some updates, the resulting transaction shouldn't be the merged one; one of the updates should fail in my scenario.
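What is described here is essentially optimistic concurrency control; a minimal server-side sketch of the failing-update check, with a hypothetical Transaction entity carrying a version counter (all names illustrative):

    // Hypothetical persistent entity with a version counter.
    class Transaction {
        long id;
        long version;
        // ... business fields ...
    }

    class TransactionService {
        // Applies a client's updated graph only if nobody saved in between;
        // the second writer fails instead of being merged.
        void save(Transaction current, Transaction incoming) {
            if (current.version != incoming.version) {
                throw new IllegalStateException(
                    "Transaction " + incoming.id + " was modified concurrently");
            }
            // ... validate, compare old with new values, persist ...
            current.version++;
        }
    }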
I don't see that RequestFactory helps support this kind of processing.
Regards
Daniel
Is it fair to say that, when considering a limited MIS application, say with 10-20 CRUD'able business objects each with ~1-10 properties, it really comes down to personal preference which route to go with?
If so, then perhaps projecting how your application is going to scale could be the key to choosing between GWT RPC and RequestFactory:
My application is expected to stay with a relatively limited number of entities, but their record counts will increase massively: 10-20 objects * 100,000 records.
My application is going to increase significantly in the breadth of entities, but the relative numbers of each will remain low: 5,000 objects * 100 records.
My application is expected to stay with a relatively limited number of entities AND relatively low record counts: e.g. 10-20 objects * 100 records.
In my case, I'm at the very starting point of trying to make this decision. It is further complicated by having to change the client-side UI architecture as well as making the transport choice. My previous (significantly) large-scale GWT UI used the Hmvc4Gwt library, which has been superseded by the GWT MVP facilities.