Access record - show lock status - forms

I use a datasheet view of a query with aggregate subqueries attached as fields. Of course this is not editable, and that is fine, as it's merely an overview listing of all the records along with some sum information from related tables. I have noticed that when a query is not editable, the record-selector lock information is not displayed. This made me wonder:
Is there some event that can be captured to display, in more or less real time, when a record is locked or released by other users?
Alternatively, is there any other way to display, in my overview list or elsewhere, which records are currently locked and, if possible, by which user?
Access 2010 (x64)

For an updatable query, the locked status may be displayed in the left margin, as you have noted. But that reflects record-locking by the database engine, which is not the same thing as whether a result set is updateable under normal circumstances.
For a read-only query, Access won't show a lock icon because in that context it isn't useful information (from most people's point of view).
You could use VBA to check whether the query as a whole is updateable, and display a notification when the form is loaded. But that doesn't relate to the record-locking icon.
Is there some event that can be captured to display, in more or less real time, when a record is locked or released by other users? -- I believe the simple answer is no.

Access 2007 saw the end of the JET user-level security model, so there is no way for you to manage user-level security in files created using the 2007 (ACCDB) or later formats.
The only alternative would be to use the Windows API to register users by their NT IDs, and to develop your own model which responded to activity. Clearly this would be no mean feat!
[Edit]
As for detecting record locks, it's possible you could implement this using an event-handler class together with the ADO library:
http://msdn.microsoft.com/en-gb/library/windows/desktop/ms678373%28v=vs.85%29.aspx
If you don't mind getting your hands dirty with Class Modules (something some pundits never got to grips with), then you can find a lead-in here.

Related

SAPUI5 multiple users working on one table entry

I'm currently developing an application on SAP BTP for multiple users. In the application you have one table where all responsibilities for a specific task are written down. These responsibilities may overlap between users, which means that multiple users can be mentioned for one responsibility.
In the application, the users should click either accept or reject to indicate whether they are still responsible for the task. After they have given their feedback, they can click a save button to write everything via a batch submit to the HANA DB. If they are no longer responsible, their name should be removed from the task and they should not see it anymore.
The problem I am facing is that currently everything is stored in one database table, and if one user gives feedback on some entries while another user works on the same entries, the user who saves last will overwrite the other's changes.
I have tried a delta insert into the database, live updates after each user input, and locking the data while another user is working on it. But none of these work well, because users would still be able to overwrite each other's entries, or entries might stay locked forever.
My question therefore is: what is the usual approach to managing multiple users' input on a single table, or is using a single table bad practice in the first place?
My second question is whether SAPUI5 supports such an approach, or whether I have to handle it another way.
You need to do server-side validation before the save action.
UI5 does not support this directly; you have to handle it yourself.
Because UI5 is stateless with respect to the data, you could use the draft concept:
https://experience.sap.com/fiori-design-web/draft-handling/
Or, as already said, back-end logic with checks before the save; a minimal sketch of such a check follows.
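For illustration, here is a minimal optimistic-concurrency sketch in TypeScript. Everything here (Task, loadTask, persist, the version column) is a hypothetical stand-in, not a SAPUI5 or CAP API: the idea is just that each row carries a version number, and the server rejects a save whose version is stale.

    // Minimal sketch of an optimistic-concurrency check on the server.
    // Task, loadTask and persist are hypothetical, not SAPUI5/CAP APIs.

    interface Task {
      id: string;
      responsibleUsers: string[];
      version: number; // bumped on every successful save
    }

    class ConcurrentUpdateError extends Error {}

    // Reject the save if another user saved this entry in the meantime.
    async function saveTask(
      incoming: Task,
      loadTask: (id: string) => Promise<Task>,
      persist: (t: Task) => Promise<void>
    ): Promise<void> {
      const current = await loadTask(incoming.id);
      if (current.version !== incoming.version) {
        // Someone else saved first: make the client reload and re-apply
        // its feedback instead of silently overwriting the other save.
        throw new ConcurrentUpdateError(
          `Task ${incoming.id} was changed by another user`
        );
      }
      await persist({ ...incoming, version: current.version + 1 });
    }

On a ConcurrentUpdateError the client would reload the entry and re-apply its feedback, rather than silently losing the other user's save; no entry can stay locked forever because nothing is ever locked.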

CQRS: can a query be a source of events?

Usually when talking about implementing CQRS, it is assumed that commands are the sources of events. But can queries made by a user be the source of events created in the event store? Or should such actions (when we need an event that reflects a query) still be implemented using a command?
But can queries made by a user be the source of events created in the event store?
Go not to the elves for counsel, for they will answer both no and yes.
So, the "no" part: queries are distinguished by the fact that they don't change the domain model. In a CQRS implementation, the queries are being served by the read model, which may not even have access to your event store at all.
when we need an event that reflects a query
The yes part: there's no law that says you can't assemble a history of queries, and stick that in your event store.
But I'm stumped, in that I don't see a clear case where the domain needs an event that reflects a query. That's really weird. My guess would be that needing an event that reflects a query is a hint that your model is broken.
You may be able to make some progress with this by exploring the source of the requirement.
If the requirement is coming from operations, analytics, reporting, usability... then the domain model probably isn't the right place for that information.
If the requirement is coming from your domain experts ("we need to capture these queries so that the model supports the right changes later"), then you should be looking to identify what entity is responsible for tracking that the query happened, and sending an appropriate command to that entity.
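For illustration, a minimal sketch of that last idea in TypeScript, with invented command/event names (nothing here is prescribed by CQRS itself): the client sends a command stating that the query happened, and the entity responsible turns it into an ordinary event.

    // Illustrative only: a command that records the fact of a query, and
    // the entity that turns it into an ordinary event for the store.

    interface RecordSearchPerformed {
      kind: "RecordSearchPerformed";
      userId: string;
      searchTerms: string;
      at: Date;
    }

    interface SearchPerformed {
      kind: "SearchPerformed";
      userId: string;
      searchTerms: string;
      at: Date;
    }

    // Handled like any other command: validate, then emit events that
    // are appended to the event store.
    function handleRecordSearch(cmd: RecordSearchPerformed): SearchPerformed[] {
      return [{
        kind: "SearchPerformed",
        userId: cmd.userId,
        searchTerms: cmd.searchTerms,
        at: cmd.at,
      }];
    }

This keeps the rule intact: the read model serves the query, and the write model only changes in response to a command.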

How to implement the number of views of a particular page

So basically I want to implement the same functionality as StackOverflow's:
viewed 59344 times
So here is some background information:
I want to count only unique visits. The assumption is that registered users will read the article many times (it is evolving)
I use MongoDB as a store
I would like it to be close to real-time
My system will have registration, but I want to count the views of anonymous users as well
I understand that the best way to count unique visits is through registration, but the thing is that a big chunk of users will be just passive readers who do not need to create an account to read the information in the application. As far as I understand, the most convenient way is to save the IP address of every user who reads the post. I also understand that IP addresses will not provide uniqueness (different users can share the same IP because they sit behind the same NAT or proxy, and one user can have different IPs by using proxies, Tor, etc.).
The use of Mongo is not absolutely essential; it's just that everything is written in Mongo right now, so I will switch only if the alternative is much faster or more convenient.
Background
Are you certain you need to track "unique" views?
I actually wouldn't expect popular sites to try to keep the view counts unique - bigger is better, and re-visits for new comments are still additional "views" in the sense of showing new content/comments/ads. There are other possible subtleties to "correctness" that may or may not be important for your use case, such as excluding crawlers or your own company's users/IPs.
Instead of spending time tracking unique views (which isn't overly meaningful), I would look at counting unique user interactions such as voting/liking/commenting on the page. You can then determine the "popularity" of a page with some formula based on those metrics. There is an interesting example of this approach in the Radioactivity module for Drupal, where a "hotness" metric is calculated based on the recency of user interactions.
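A toy TypeScript sketch of such a recency-weighted score, in the spirit of the Radioactivity approach: each interaction contributes points that decay exponentially over time. The weights and half-life below are invented for illustration, not taken from the module.

    // Toy recency-weighted "hotness" score: every interaction adds
    // points that decay exponentially. Weights and half-life are
    // invented.

    const HALF_LIFE_HOURS = 24;

    interface Interaction {
      kind: "vote" | "like" | "comment";
      at: Date;
    }

    const WEIGHTS: Record<Interaction["kind"], number> = {
      vote: 3,
      like: 1,
      comment: 5,
    };

    function hotness(interactions: Interaction[], now = new Date()): number {
      const lambda = Math.LN2 / HALF_LIFE_HOURS; // decay rate per hour
      return interactions.reduce((sum, i) => {
        const ageHours = (now.getTime() - i.at.getTime()) / 3_600_000;
        return sum + WEIGHTS[i.kind] * Math.exp(-lambda * ageHours);
      }, 0);
    }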
Approaches to consider
1) For a simple view counter in MongoDB, I would just use $inc to bump up the view count when the page is loaded. You can exclude logging views by user role as needed (for example, admin users).
2) For a more accurate view counter, I would pass the problem off to a web analytics platform (which you should be using with your site for more detailed analysis anyway). For example, you can use the Google Analytics API or an open source application like Piwik. Web analytics systems already have solutions in place for determining unique users/views, and the API calls can be made asynchronously via JavaScript.
3) If implementing your own unique view tracking is a definite requirement, I would use a separate collection for tracking views and upsert based on your uniqueness criteria (a unique view per (user, article) pair for registered users, or per (session_id, article) pair for anonymous users). I would combine this with approach #1 by incrementing the article's view counter only when the upsert results in an insert. Approaches #1 and #3 are sketched together after this list.
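A sketch of that combination using the Node.js MongoDB driver; the collection and field names (articles, article_views, viewCount) and the database name are invented, and viewerKey stands for the user id or session id.

    // Sketch of approaches #1 and #3 with the Node.js MongoDB driver.
    import { MongoClient } from "mongodb";

    async function countView(
      client: MongoClient,
      articleId: string,
      viewerKey: string // user id for registered users, session id otherwise
    ): Promise<void> {
      const db = client.db("mydb");

      // Approach #3: one document per (viewer, article) pair. The upsert
      // inserts only the first time this pair is seen.
      const result = await db.collection("article_views").updateOne(
        { articleId, viewerKey },
        { $setOnInsert: { firstSeen: new Date() } },
        { upsert: true }
      );

      // Approach #1: bump the counter only when the upsert inserted,
      // i.e. this viewer had not seen the article before.
      if (result.upsertedCount === 1) {
        await db
          .collection<{ _id: string; viewCount: number }>("articles")
          .updateOne({ _id: articleId }, { $inc: { viewCount: 1 } });
      }
    }

A unique index on { articleId, viewerKey } would make the uniqueness guarantee hold up under concurrent requests as well.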
One way to solve the problem is by using cookies: once a user has visited the page, you can add a cookie saying that he has already visited it, so you do not need to count him again. You can keep appending keys to record which pages he has visited. I know cookies can be deleted, but every solution involves a trade-off.
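A browser-side sketch of that idea; the cookie name and format are made up, and sendHit stands in for whatever call records the view on the server.

    // Remember visited page keys in one cookie so repeat views by the
    // same browser are not counted again.
    function visitedPages(): string[] {
      const match = document.cookie.match(/(?:^|;\s*)visited=([^;]*)/);
      return match ? decodeURIComponent(match[1]).split(",").filter(Boolean) : [];
    }

    function countPageOnce(pageKey: string, sendHit: (key: string) => void): void {
      const visited = visitedPages();
      if (visited.includes(pageKey)) return; // already counted here
      sendHit(pageKey); // e.g. an AJAX call that does the $inc server-side
      visited.push(pageKey);
      document.cookie =
        "visited=" + encodeURIComponent(visited.join(",")) +
        ";max-age=31536000;path=/"; // roughly one year
    }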
From the MongoDB perspective, if you want very fast inserts and reads, I would suggest a couple of things you can do:
1) As you create an article, create a document like this in, say, a log collection:
{"_id": "Article URL", "Hit": 0}
Why am I not suggesting adding IP addresses or any other information? Because as you add IP addresses, the size of the document changes, and MongoDB then needs to find newly allocated space for it (at least with the MMAPv1 storage engine), which is bad from a performance angle. If you only increment the counter, the document does not grow and does not need to move. Plus, there is a limit on the maximum size a document can have.
2) Creating the document in advance gives you a direct update statement, with no need to check whether a document for the article ID already exists.
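The same two steps, sketched with the Node.js driver; the "log" collection and "Hit" field follow the answer above, while the database name is invented.

    import { MongoClient } from "mongodb";

    async function createArticleCounter(client: MongoClient, articleUrl: string) {
      // 1) Pre-create the fixed-size counter document with the article.
      await client
        .db("mydb")
        .collection<{ _id: string; Hit: number }>("log")
        .insertOne({ _id: articleUrl, Hit: 0 });
    }

    async function recordHit(client: MongoClient, articleUrl: string) {
      // 2) A plain $inc: no existence check, and the document never grows.
      await client
        .db("mydb")
        .collection<{ _id: string; Hit: number }>("log")
        .updateOne({ _id: articleUrl }, { $inc: { Hit: 1 } });
    }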

Trying to prevent multiple database calls with a very large call

So we run a downline report that gathers everyone in the downline of the person who is logged in. For some clients this runs with no problem, as it returns fewer than 100 records.
For other clients, however, it returns 4,000 - 6,000 rows, which comes out to about 8 MB of data. I actually had to raise the buffer limit on my development machine to handle the large request.
What are some of the best ways to store this large piece of data and help prevent it from being run multiple times consecutively?
Can it be stored in a cookie?
Session is out of the question, as this would eat up way too much memory on the server.
I'm open to pretty much anything at this point; I'm trying to streamline the old process into a much quicker, more efficient one.
Right now, it loads the entire recordset and loops through it, building the data out into return_value cells.
Would this be better to turn into a jquery/ajax call?
The only main requirements are:
classic asp
jquery/javascript
T-SQL
Why not change the report to be paged? Phase 1: run the entire query, but have the page display only the right set of rows based on the selected page; now your response buffer problem is fixed. Phase 2: move the paging into the query using ROW_NUMBER() (the query shape is sketched below); now your database usage problem is fixed. Phase 3: offer the user an option of "display to screen" (using the above) or "export to CSV", where you can most likely export all the data, since CSV is nice and compact.
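What the Phase 2 query might look like, shown as a parameterized T-SQL string; the table and column names (Downline, SponsorId, LastName) are invented, and @userId, @page and @pageSize would be supplied as query parameters.

    // Shape of the Phase 2 paging query (hypothetical schema).
    const downlinePageSql = `
      WITH numbered AS (
        SELECT d.*,
               ROW_NUMBER() OVER (ORDER BY d.LastName, d.Id) AS rn
        FROM Downline AS d
        WHERE d.SponsorId = @userId
      )
      SELECT *
      FROM numbered
      WHERE rn BETWEEN (@page - 1) * @pageSize + 1
                    AND @page * @pageSize;
    `;

Only one page of rows ever leaves the database, so the 8 MB response disappears regardless of how large the downline is.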
Using a cookie seems unwise, given the responses to the question What is the maximum size of a web browser's cookie's key?.
I would suggest using ASP to create a file on the web server and writing the data to that file. When the user requests the report, you can then determine whether "enough time" has passed for it to be worth running the report again, or whether the cached version is sufficient (the freshness check is sketched below). The user's login details could presumably be used for naming the file, or the Session.SessionID, or you could store something new in the user's session. The advantage of using their login is that your cached report can outlive the user's session.
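The freshness check, illustrated here in Node/TypeScript terms to keep these examples in one language (the answer itself assumes classic ASP); the cache directory, naming-by-login, and the ten-minute window are all assumptions.

    import { existsSync, readFileSync, statSync } from "fs";

    const MAX_AGE_MS = 10 * 60 * 1000; // "enough time" between report runs

    // Return the cached report if it is fresh enough, otherwise null
    // (meaning the caller should re-run the report and rewrite the file).
    function cachedReport(login: string): string | null {
      const path = `./report-cache/${login}.html`;
      if (!existsSync(path)) return null; // never run for this user
      const ageMs = Date.now() - statSync(path).mtimeMs;
      return ageMs < MAX_AGE_MS ? readFileSync(path, "utf8") : null;
    }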
Taking Brian's answer further: query the page count, which would be records returned / items per page, rounded up. Then join the results of each page query on the client side. Pages start at an offset provided through the query. Now you have the full amount on the client without overflowing your buffer, and it can be tailored to the interface and a user option (display x per page).

New/Read Flags in CQRS

I am currently drafting a concept for a (mostly) HTML-based collaboration suite which I plan to implement using CQRS. This software will contain messages that can be sent to the user (which can either be read or unread, obviously) and other elements which shall be marked "new" if they were created after the last user login.
Hardly something new, but I am not quite sure how that would be correctly implemented using CQRS. As I understand it, change of any kind should, without exception, only be possible via commands. But creating commands for every single (new) element that is being accessed seems a bit too much, not to mention the overhead.
I don't know if I need it, but what would be the best way to implement a last-accessed timestamp on elements? Basically the same problem as above, with the difference that the change happens EVERY time the element is accessed, not only the first time for each user.
CQRS seems to be an awesome concept but it really needs more learning material. Can't wait till a book is released :)
Regards
[Edit] No one? I wouldn't have thought that this is such a complicated issue...
I assume you're using event sourcing, in which case, once you allow your query service/event handlers to raise appropriate events, this becomes fairly easy to solve.
For your messages/elements: when handling the specific creation events of your elements, either add to your existing event handlers or create additional ones to store to a messages read-model, with a status of new and appropriate information about the element.
As part of your user login, I don't see why you can't raise a user-logged-in event (from the security/query service, depending on how you're implementing authentication) to say the user has logged in. An event handler could capture this and write the last-login timestamp to a specific user-last-login read-model.
In addition, the user-logged-in event handler would need to update all the new messages (for that user) to an unread status; a sketch follows. Seeing as we're changing the status of the messages as the user logs in, do you still need to store the last-login timestamp?
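A minimal sketch of that handler in TypeScript; the event shape and read-model interface are invented for illustration.

    interface UserLoggedIn {
      kind: "UserLoggedIn";
      userId: string;
      at: Date;
    }

    interface MessagesReadModel {
      setLastLogin(userId: string, at: Date): Promise<void>;
      // flip all of this user's "new" messages to "unread"
      demoteNewToUnread(userId: string): Promise<void>;
    }

    // Runs whenever a UserLoggedIn event is dispatched to the read side.
    async function onUserLoggedIn(
      e: UserLoggedIn,
      rm: MessagesReadModel
    ): Promise<void> {
      await rm.setLastLogin(e.userId, e.at);
      await rm.demoteNewToUnread(e.userId);
    }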
For your last-accessed timestamp, perhaps you could just work this into your query service as queries for your different elements complete. Raise a query-completed event with element id/type information.