InterSystems Caché log4 equivalent

Does anyone know if there is something similar to log4 available for Caché?
I've used log4net on a number of .NET projects and would like to have something with the same capabilities, in particular the logging levels and the ability to configure them separately for local, dev, QA, and prod environments.

It's hard to state definitively that something doesn't exist, but I'm pretty sure it doesn't. The only sources I've been able to find for Caché libraries are either InterSystems itself, of course, or the M/Gateway website (I am not affiliated with the M/Gateway product or website, nor am I endorsing it; it's just the only Caché code repository I've been able to find. It requires registration and has under 50 files, many of them tutorials).
Searching GitHub for "Intersystems" does turn up some repositories, and "Intersystems logging" returns a few results too, but none of them appear to be anything like log4net.
The base InterSystems libraries include things like ^%ETN, which traps and logs error data, but it just writes some fixed data into globals; it's nothing like log4net.
So, in summary, I wouldn't hold my breath.
You might consider using log4net as a basis for rolling your own simplified version. A Caché way to implement it might be to inherit from a logging class to get the Log method. You could get class-specific data either by run-time reflection or by having the Log base class include a generator method to fetch the class-specific data (more efficient). In Caché a configuration file probably isn't that useful, so I would suggest persistent configuration classes instead. I would design it to allow multiple configurations to be saved to disk at once, with some method of designating one as active.
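A rough sketch of the shape I mean, written in Java purely for illustration (every name in it is hypothetical); in Caché the LogConfig part would be a persistent class, with one saved configuration per environment and exactly one marked active, and the sink would be a global rather than standard output:

```java
import java.time.Instant;

// Severity levels, in the spirit of log4net.
enum LogLevel { DEBUG, INFO, WARN, ERROR }

// One named configuration. Several of these would be saved at once
// (local, dev, QA, prod), with exactly one designated as active.
class LogConfig {
    private static LogConfig active = new LogConfig("dev", LogLevel.DEBUG);

    final String name;
    final LogLevel threshold;

    LogConfig(String name, LogLevel threshold) {
        this.name = name;
        this.threshold = threshold;
    }

    static LogConfig active()              { return active; }
    static void activate(LogConfig config) { active = config; }
}

// Base class that application classes inherit from to get log().
abstract class LoggingBase {
    protected void log(LogLevel level, String message) {
        if (level.ordinal() >= LogConfig.active().threshold.ordinal()) {
            System.out.printf("%s [%s] %s: %s%n",
                    Instant.now(), level, getClass().getSimpleName(), message);
        }
    }
}

// Example consumer: activating a "prod" configuration with threshold WARN
// silences DEBUG/INFO output without touching this class.
class OrderService extends LoggingBase {
    void placeOrder() {
        log(LogLevel.INFO, "order placed");
    }
}
```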

REST API design: Managing access to sub-entities

Note: I realize that this is close to being off-topic for being opinion-based, but I am hoping that there is some accepted best practice to handle this that I just don't know about.
My problem is the following: I need to design a Rest API for a program where users can create their own projects, and each project contains files that can only be seen by users that have access. I am stuck with how to design the "List all files of a project" query.
Standard REST API practice would suggest two sets of endpoints, like:
`GET /projects` # List all projects
`POST /projects` # Create new project
`GET /projects/{id}` # Get specific project
etc.
and the same for the files of a project.
However, there should never be a reason to list all files, only the files of a single project. To make it more complicated, access management needs to be a thing: users should never see files that are in projects they don't have access to.
I can see multiple ways to handle that:
The obvious way is to implement the GET endpoint, optionally with a filter. However, this isn't optimal, since if the user doesn't set a filter, it would have to crawl through all projects, check for each project whether the user has access, and then list all the files the user has access to:
`GET /files?project=test1`
I could also make the files endpoint a sub-resource of the projects endpoint, e.g.
`GET /projects/{id}/files`
However, I have the feeling this isn't too restful, since it doesn't expose entities directly?
Is there any consensus on which should usually be implemented? Is it okay to "force" users to set a parameter in the first one? Or is there a third alternative that solves what I am looking for? I'd also be happy about any literature recommendations on how to design this.
Standard REST API practice would suggest two sets of endpoints
No, it wouldn't. REST practice would suggest figuring out the resources in your resource model.
Think "documents": I should be able to retrieve (GET) a document that describes all of the files in the project. Great! This document should only be accessible when the request authorization matches some access control list. Also good.
Maybe there should also be a document for each user, so they can see a list of all of the projects they have access to, where that document includes links to the "all of the files in the project" documents. And of course that document should also be subject to access control.
Note that "documents" here might be text, or media files, or scripts, or CSS, or pretty much any kind of information that you can transmit over the network. We can gloss the details, because "uniform interface" means that we manage them all the same way.
In other words, we're just designing a "web site" filled with interlinked documents, with access control.
Each document is going to need a unique identifier. That identifier can be anything we want: /5393d5b0-0517-4c13-a821-c6578cb97668 is fine. Because it can be anything we want, we have extra degrees of freedom.
For example, we might design our identifiers such that documents whose identifiers begin with /users/12345 are only accessible to requests with authorization headers that match user 12345, and that all documents whose identifiers begin with /projects/12345 are only accessible to requests with authorization headers that match any of the users that have access to that specific project, and so on.
In other words, it is completely acceptable to choose resource identifier spellings that make your implementation easier.
(Note: in an ideal world, you would have "cool" identifiers that are implementation agnostic, so that they still work even if you change the underlying implementation details of your server.)
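To make the prefix idea concrete, here is a minimal sketch of enforcing such a rule with a JAX-RS request filter (the /users/{id} path layout and the "principal name equals user id" convention are assumptions carried over from the example above, not part of any standard):

```java
import java.io.IOException;
import java.security.Principal;

import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.Provider;

// Rejects any request whose path starts with users/{id} unless the
// authenticated principal is that user. Paths under projects/{id} would
// get a similar check against the project's access list.
@Provider
public class PrefixAuthorizationFilter implements ContainerRequestFilter {

    @Override
    public void filter(ContainerRequestContext request) throws IOException {
        String path = request.getUriInfo().getPath();   // e.g. "users/12345/projects"
        Principal caller = request.getSecurityContext().getUserPrincipal();

        if (path.startsWith("users/")) {
            String ownerId = path.split("/")[1];
            if (caller == null || !ownerId.equals(caller.getName())) {
                request.abortWith(Response.status(Response.Status.FORBIDDEN).build());
            }
        }
    }
}
```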
I have the feeling this isn't too restful, since it doesn't expose entities directly?
It's fine. Resource models and entity models are different things; we shouldn't expect them to always match one to one.
After looking further, I came across this document from Microsoft. Some quotes:
Also consider the relationships between different types of resources and how you might expose these associations. For example, the /customers/5/orders might represent all of the orders for customer 5. You could also go in the other direction, and represent the association from an order back to a customer with a URI such as /orders/99/customer. However, extending this model too far can become cumbersome to implement. A better solution is to provide navigable links to associated resources in the body of the HTTP response message. This mechanism is described in more detail in the section Use HATEOAS to enable navigation to related resources.
In more complex systems, it can be tempting to provide URIs that enable a client to navigate through several levels of relationships, such as /customers/1/orders/99/products. However, this level of complexity can be difficult to maintain and is inflexible if the relationships between resources change in the future. Instead, try to keep URIs relatively simple. Once an application has a reference to a resource, it should be possible to use this reference to find items related to that resource. The preceding query can be replaced with the URI /customers/1/orders to find all the orders for customer 1, and then /orders/99/products to find the products in this order.
This makes me think that using solution 2 is probably the best case for me, since each file will be associated with only a single project, and should be deleted when a project is deleted. Files cannot exist on their own, outside of projects.
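To sketch what that nested route could look like (here with JAX-RS; the in-memory ProjectRecord map is a hypothetical stand-in for the real backing store):

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

import javax.ws.rs.GET;
import javax.ws.rs.NotFoundException;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.SecurityContext;

@Path("projects")
public class ProjectResource {

    // Hypothetical backing store: project id -> (allowed user names, file names).
    static class ProjectRecord {
        Set<String> allowedUsers;
        List<String> fileNames;
    }

    private final Map<String, ProjectRecord> projects;

    public ProjectResource(Map<String, ProjectRecord> projects) {
        this.projects = projects;
    }

    // GET /projects/{id}/files - list the files of one project,
    // visible only to users with access to that project.
    @GET
    @Path("{id}/files")
    public List<String> listFiles(@PathParam("id") String projectId,
                                  @Context SecurityContext security) {
        ProjectRecord project = projects.get(projectId);
        String caller = security.getUserPrincipal() == null
                ? null : security.getUserPrincipal().getName();
        if (project == null || caller == null || !project.allowedUsers.contains(caller)) {
            // Answering 404 for both "missing" and "no access" avoids
            // leaking which project ids exist.
            throw new NotFoundException();
        }
        return project.fileNames;
    }
}
```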

Can an Ansible plugin call a role or a playbook?

I have a big role I am looking to pack into a collection, and I want to create a plugin that calls the role, instead of doing "include_role". So I am looking for my customers to be able to write something like:
- name: call my plugin
  my_plugin:
    param1: "mmm"
    param2: 42
and that will, internally, run some verifications, but eventually invoke the entire role (as opposed to just calling a module).
All the documentation I have found seems to cover calling modules from a plugin, but nothing seems to be able to call a role (or a playbook). Is there a way to achieve that?
If you really want to go this way, you can look into the include_role source code (e.g. here: https://github.com/ansible/ansible/blob/2cbfd1e350cbe1ca195d33306b5a9628667ddda8/lib/ansible/playbook/role_include.py). It doesn't look to me like a normal plugin, though.
But there is a more serious issue here: if you create something like that, it would totally obscure the role content from the user. Roles aren't 'Python modules', and isolation is very weak for roles. By hiding role content (and execution) from users, you basically create your own version of Ansible, with a fresh and unknown list of bugs and quirks.
If you want to control the way code is executed, a strategy plugin may be a more reasonable place. You still allow users to see the usual execution workflow, but you get a great deal of control over how things are executed.
That said, writing strategy plugins is crazy hard. I know of only one third-party strategy plugin (Mitogen).

Keeping track of changed properties in JPA

Currently, I'm working on a Java EE project with some non-trivial requirements regarding persistence management. Changes to entities by users first need to be applied to some working copy before being validated, after which they are applied to the "live data". Any changes on that live data also need to have some record of them, to allow auditing.
The entities are managed via JPA, and Hibernate will be used as provider. That is a given, so we don't shy away from Hibernate-specific stuff. For the first requirement, two persistence units are used. One maps the entities to the "live data" tables, the other to the "working copy" tables. For the second requirement, we're going to use Hibernate Envers, a good fit for our use-case.
So far so good. Now, when users view the data on the (web-based) front-end, it would be very useful to be able to indicate which fields were changed in the working copy compared to the live data. A different colour would suffice. For this, we need some way of knowing which properties were altered. My question is, what would be a good way to go about this?
Using the JavaBeans API, a PropertyChangeListener could suffice to be notified of any changes in an entity of the working copy and keep a set of them. But the set would also need to be persisted, since the application could be restarted and changes can be long-lived before they're validated and applied to the live data. And applying the changes on the live data to obtain the working copy every time it is needed isn't feasible (hence the two persistence units).
We could also compare the working copy to the live data and find fields that are different. Some introspection and reflection code would suffice, but again that seems rather processing-intensive, not to mention the live data would need to be fetched.
Maybe I'm missing something simple, or someone knows of a wonderful JPA/Hibernate feature I can use. Even if I can't avoid making (a) separate database table(s) for storing such information until it is applied to the live data, some best practices or real-life experience with this scenario could be very useful.
I realize it's a semi-open question but surely other people must have encountered a requirement like this. Any good suggestion is appreciated, and any pointer to a ready-made solution would be a good candidate as accepted answer.
Maybe you can use the Hibernate flush entity event listener. The dirty properties are calculated before the flush. You can store them somewhere in your database.
Sample code that uses Hibernate's dirty-properties feature may give you an idea of how to wire this up.
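For illustration, a minimal sketch of capturing that information through Hibernate's Interceptor callback, whose onFlushDirty method fires for each dirty entity during flush (the class name and recordChange are hypothetical; the interceptor can be registered for the persistence unit with the hibernate.ejb.interceptor property):

```java
import java.io.Serializable;
import java.util.Objects;

import org.hibernate.EmptyInterceptor;
import org.hibernate.type.Type;

// Captures which properties Hibernate considers dirty when an entity is flushed.
public class DirtyPropertyInterceptor extends EmptyInterceptor {

    @Override
    public boolean onFlushDirty(Object entity, Serializable id,
                                Object[] currentState, Object[] previousState,
                                String[] propertyNames, Type[] types) {
        for (int i = 0; i < propertyNames.length; i++) {
            Object before = previousState == null ? null : previousState[i];
            if (!Objects.equals(before, currentState[i])) {
                recordChange(entity.getClass().getSimpleName(), id, propertyNames[i]);
            }
        }
        return false; // the entity state itself is not modified here
    }

    private void recordChange(String entityName, Serializable id, String property) {
        // Hypothetical: insert a row into a bookkeeping table so the front-end
        // can later highlight (entityName, id, property) as changed.
    }
}
```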

How to make an external requirement internal

I'm using version 8.0.858 of Enterprise Architect and I am hoping someone knows how to make an external requirement internal again.
I have searched through the EA user guide, which tells me how to make an internal requirement external, but is silent on how to reverse the process.
I have hundreds of requirements linked to Use Cases where the requirement is marked as external, but they shouldn't be as they each only relate to one Use Case.
Here's an example of what I'm talking about
This makes it difficult to get an overview of what the Use Case requires because when you click on an external requirement, the description does not show up in the textbox, and you have to double-click it to open in a separate window.
My only thought is to hack the database in Access, but I'd rather not if there is any UI functionality for this. That said, if you do know how to edit the database directly to achieve my goal, that would be a valid solution too.
To my knowledge this isn't possible, for the reason #observer notes. External requirements are model elements in their own right and thus have far more information associated with them than internal requirements do.
External requirements (and other model elements) are stored in the t_objects table, while internal requirements are in t_objectrequires. Connectors are in t_connector.
I'd advise against trying to hack the database directly. Use the automation interface instead (it can be accessed from an in-EA script); look at the Element and ElementRequirement classes.

How do we share data between two different services

I am currently working on a web service which is periodically polled. It does not store its state and is instantiated every time it is queried. Essentially, it retrieves the state of other external entities, e.g. databases, and delivers it back to the requester.
Recently, the need to store state has arisen, in that:
There is a need to continuously collect data from a particular source and store the bits that are important/relevant
There is a need to collect the aggregate of a particular data source over a period of time
I came up with the following idea: have both services share the collected data through a common static class.
My main concern here is the fact that I am using a static class (essentially a global) to share data between the two services. Is there a better way of doing this?
edit: Thanks for the responses thus far. Apologies for the vagueness of this question: I am just trying to work out the best way to share data across different services and am unsure as to the specifics (i.e. what is required). The platform I am developing on is the .NET Framework, and both services are simply WCF services hosted as a Windows service.
The database route sounds like the most conventional way to go; however, I am reluctant to go down that path for now (mainly for deployment/setup issues; it introduces the need to create new tables, etc., in addition to simply installing the software) for what is, at this point, the transfer of relatively small amounts of data. This may of course change in the future, and going the database route might be the way to go at that point.
Is there any other way besides adding a database persistence layer?
If you need to collect and aggregate data, you might want to consider using a database between the two layers. Or have I misunderstood something?
You should consider enhancing your question with more requirements: pretty much all options are open here.
Sure: how about data binding? I don't have a lot of information to go on here about your platform, but most sufficiently advanced systems offer it in some form.
You could replace your static shared data with some database representation, with a caching layer (like memcached) between the database and the webservice, so that most of the time the data is available very quickly from the cache, but can be retrieved from the database as needed.
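A minimal sketch of that read-through idea (the ConcurrentHashMap is just a stand-in for memcached or another shared cache, and loadFromDatabase is a hypothetical query against the database the collecting service writes to):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Read-through lookup: serve from the cache when possible, fall back to the
// database and populate the cache on a miss.
class AggregateCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    String get(String key) {
        return cache.computeIfAbsent(key, this::loadFromDatabase);
    }

    void invalidate(String key) {
        cache.remove(key); // call when the collecting service writes new data
    }

    private String loadFromDatabase(String key) {
        // Hypothetical: query the shared database for the aggregated value.
        return "...";
    }
}
```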
I appreciate that you want to keep the architecture simple. Depending on the number of items you have to look up and their permanency, you might just consider leveraging your file system or a message queue. It sounds like you want a file system, because that sounds like the least impact on your design.
If you start dealing with tens of thousands of small files, your directories could get hard to navigate and slow to do file lookups on. I typically shoot for about 1,000 to 10,000 files per directory, and concoct a routine that can generate a path to the file depending on the file name pattern. Keeping the distribution across subdirectories even is also important; some file systems have a limit on the number of subdirectories in a parent directory.
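A small routine along these lines (the two-level, 256-way layout and the base directory are arbitrary choices) keeps any single directory from growing without bound:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

class FileStore {
    // Maps a file name onto base/xx/yy/name, where xx and yy are derived
    // from the name's hash, so files spread evenly across the directories.
    static Path pathFor(Path base, String fileName) {
        int hash = fileName.hashCode();
        String level1 = String.format("%02x", (hash >>> 8) & 0xFF);  // 256 buckets
        String level2 = String.format("%02x", hash & 0xFF);          // 256 buckets each
        return base.resolve(level1).resolve(level2).resolve(fileName);
    }

    public static void main(String[] args) {
        System.out.println(pathFor(Paths.get("/var/data"), "example-report.xml"));
    }
}
```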