Does MarkLogic DLS offer a similar file versioning experience to Subversion?
In Subversion, once a file (document) has been locked, others cannot update it until the file is committed (checked in) or the lock is released.
However, in MarkLogic Library Services (DLS), once a document has been checked out, others can still call dls:document-checkout-update-checkin to update it and release the lock. Does that mean the developer is expected to implement the lock and unlock mechanism on top of those dls functions?
I tried to use the timeout parameter in dls:document-checkout. However, it seems the document remains in the checked-out status forever, although I do see that parameter when I call dls:document-checkout-status.
Does that mean the developer should compare the current server timestamp against the initial checkout timestamp plus the timeout duration to determine whether the file is still locked?
If so, I will need to write some XQuery and set up a scheduled task in MarkLogic to clean up stale checkouts daily. Is my understanding correct?
Per https://docs.marklogic.com/guide/app-dev/dls#id_56448, I believe the timeout is not enforced automatically - i.e. there is no background process in MarkLogic that periodically inspects documents to see whether they should be automatically checked back in or un-checked out. The timeout appears to be meant for the developer to apply their own logic - e.g. allowing a UI to state "Jane checked this document out and only intended to keep it for 10 minutes, but that was 2 hours ago - would you like to break her checkout?"
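If you do go the scheduled-task route, the cleanup module can stay small. Below is a hedged XQuery sketch, not a drop-in implementation: the /managed/ directory is a placeholder, cts:uris assumes the URI lexicon is enabled, and it assumes the dls:timestamp and dls:timeout values in the checkout status element are both expressed in seconds - verify all of that against your MarkLogic version before scheduling it.

    xquery version "1.0-ml";
    (: Hedged sketch: break checkouts whose timeout has expired. :)
    import module namespace dls = "http://marklogic.com/xdmp/dls"
      at "/MarkLogic/dls.xqy";

    (: Current time as seconds since the Unix epoch :)
    declare function local:now-seconds() as xs:decimal
    {
      (fn:current-dateTime() - xs:dateTime("1970-01-01T00:00:00Z"))
        div xs:dayTimeDuration("PT1S")
    };

    for $uri in cts:uris("", (), cts:directory-query("/managed/", "infinity"))
    let $status := dls:document-checkout-status($uri)
    let $timeout := xs:decimal($status/dls:timeout)
    let $checked-out := xs:decimal($status/dls:timestamp)
    where fn:exists($status)
      and $timeout gt 0
      and local:now-seconds() gt ($checked-out + $timeout)
    return (
      xdmp:log(fn:concat("Breaking stale checkout on ", $uri)),
      dls:break-checkout($uri)
    )

Wiring a module like that into a scheduled task (configured per group in the Admin UI) with a daily frequency matches the cleanup you describe.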
Two days ago, we started seeing some issues with our Cadence setup.
The first thing we noticed is that open workflows were not disappearing from the list once they completed. For example, a workflow appears as Open in the list, but when you click on it, you can see that it has actually completed.
At the same time this started to happen, we noticed that several workflows were taking quite a long time to complete; several of them would get stuck in the "Schedule" state and never progress further. After checking the logs, the only error we saw was this:
{"level":"error","ts":"2021-03-06T19:12:04.865Z","msg":"Persistent store operation failure","service":"cadence-matching","component":"matching-engine","wf-task-list-name":"cadence-sys-history-scanner-tasklist-0","wf-task-list-type":1,"store-operation":"create-task","error":"InternalServiceError{Message: CreateTasks operation failed. Error : Request on table cadence.tasks with ttl of 630720000 seconds exceeds maximum supported expiration date of 2038-01-19T03:14:06+00:00. In order to avoid this use a lower TTL, change the expiration date overflow policy or upgrade to a version where this limitation is fixed. See CASSANDRA-14092 for more details.}","wf-task-list-name":"cadence-sys-history-scanner-tasklist-0","wf-task-list-type":1,"number":6300094,"next-number":6300094,"logging-call-at":"taskWriter.go:176","stacktrace":"github.com/uber/cadence/common/log/loggerimpl.(*loggerImpl).Error\n\t/cadence/common/log/loggerimpl/logger.go:134\ngithub.com/uber/cadence/service/matching.(*taskWriter).taskWriterLoop\n\t/cadence/service/matching/taskWriter.go:176"}
Does somebody have an idea of why this is happening?
The first issue is because visibility sampling is enabled by default (to protect the default/core DB). You can disable it by setting system.enableVisibilitySampling to false in dynamic config (snippet below).
But when you do that, it's better to separate the visibility store and the default store into different database clusters, so that visibility load doesn't bring down the default (core data model) DB.
See more in https://github.com/uber/cadence/issues/3884
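For reference, with the file-based dynamic config client that is a small entry in the dynamic config YAML; the exact file location depends on your deployment, so treat this as a sketch:

    system.enableVisibilitySampling:
    - value: false
      constraints: {}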
The second issue is a bug that was fixed in 0.16.0.
It should be resolved if you upgrade the server.
See https://github.com/uber/cadence/pull/3627
and https://docs.datastax.com/en/dse-trblshoot/doc/troubleshooting/recoveringTtlYear2038Problem.html
I'm testing out Microsoft Sync Framework to see whether it's suitable for a task I'm working on. One of the things I'd like is the option to send not just changed files, but all of the files (for example, when I'm syncing to a client machine for the first time and so want to send everything).
I can't seem to find an example of this in the documentation, so any advice would be welcome.
If you're syncing for the first time, there is nothing special to configure, as it will sync everything.
If you've already synced and want to re-send all files regardless of whether they've changed, just delete the metadata file; that removes all knowledge of what has been synced.
In our project we follow agile practices (sprints), so a nightly build is done every day. We are able to ensure the correctness of the build until the day before the formal build, but unfortunately people often make major check-ins on the final day.
We want to lock some of the highly sensitive elements that would cause the most trouble.
We do not want to lock the integration stream itself; we just want to lock some files and folders automatically. Is there any way to do this using cleartool (or cleartool commands in PowerShell)?
I would not recommend locking the VOB or the files:
both options would lock everything (i.e. any modification in any branch) for all (or most) users;
you need (per the cleartool lock man page) to be the type owner, VOB owner, or root to lock files or a VOB: if one of those sensitive files wasn't created by you, the lock will fail (and the VOB itself was likely created by an admin);
for files, the maintenance is too cumbersome (you need to maintain the list of files you want to lock).
Locking the stream or at least the branch is still your best option.
It is one simple atomic operation target to lock the right environment.
Combined with the -nusers option, you can still authorized some users to do what they need (checkout/checkins)
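For instance (the stream and PVOB names here are placeholders; adapt them to your project):

    # lock the integration stream for everyone except the integrator
    cleartool lock -nusers integrator_login stream:my_int_stream@/vobs/my_pvob

    # release the lock after the formal build
    cleartool unlock stream:my_int_stream@/vobs/my_pvob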
The OP comments:
Actually I want to prevent all the users from delivering those sensitive files.
If I lock the stream for a particular user, it will not serve the purpose; it will stop them from delivering other files too.
The -nusers option locks for all users except a few.
The idea behind the integration stream is that it is not the user who makes the deliver, but the integration stream owner who, at his/her own pace, performs the deliver. If that stream is locked for everyone but the integrator, he/she can control the delivers.
However, that puts the control of those sensitive files on the integrator (again, locking just those files would be a bad idea and would ensure that any deliver fails because of those locks).
If you still want users to deliver while ensuring that the build only uses a certain version of those files, then I would rather recommend:
not locking the stream
putting a baseline before final day
tweaking your build script in order for it to:
use whatever version is found on final day,
except for those "sensitive files", for which the script would fetch their baselined version (and not the LATEST version found on final day, because that LATEST version might have been changed by a last-minute deliver); see the sketch after the links below.
See for instance "Clearcase command to export an element" or
"In ClearCase, how can I view old version of a file in a static view, from the command line?".
The other day a friend suggested playing a web browser game called OGame. If you don't know it, I'll tell you what it is: an RTS game where you build things like mining factories, barracks, and so on. The interesting thing is that every building has a build time, and you can log off while it's building because construction keeps going.
Something like this, I believe, is managed via a DBMS. I have records that store the end time of a construction. How do I check when to update a building? Do I need an external application that checks every second which records need to be updated? Is it possible with MySQL 5 to have an internal scheduler that launches a procedure on this table? And if so, is that a best practice?
I built a similar game and stored the construction end times (and other events to be fired) in an events table. I wrote a PHP daemon that regularly checks the events table for expired records and acts on them accordingly.
I couldn't find a way to do it in the database itself (and if I later wanted to migrate to another DB, it would need rewriting). A cron'd script may overlap itself. A daemon can keep track of everything all the time and output debug information if events are queuing faster than they're being processed. I also added a cron job that periodically checks that my daemon is still running and restarts it if it isn't.
Creating a daemon in PHP (if you're using PHP)
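The core of such a daemon is just a polling loop. Here is a hedged sketch; the table and column names (events, buildings, ends_at, processed) and the connection details are made up, so adapt them to your schema:

    <?php
    // Hedged sketch of the polling daemon; table/column names and credentials are placeholders.
    $db = new PDO('mysql:host=localhost;dbname=game', 'user', 'password');

    while (true) {
        // Pick up events whose end time has passed and that haven't been handled yet.
        $due = $db->query(
            "SELECT id, building_id FROM events WHERE ends_at <= NOW() AND processed = 0"
        );
        foreach ($due as $event) {
            // Mark the construction as finished ...
            $db->prepare("UPDATE buildings SET completed = 1 WHERE id = ?")
               ->execute(array($event['building_id']));
            // ... and the event as processed so it isn't picked up again.
            $db->prepare("UPDATE events SET processed = 1 WHERE id = ?")
               ->execute(array($event['id']));
        }
        sleep(1); // poll once a second
    }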
Hope that helps.
So in one part of our customised Salesforce system, the following happens:
a trigger changes the value of a picklist on a custom object
a Workflow rule detects that change and fires off an email.
Since about the 4th of December though, it seems to have stopped working.
Edit: the debug logs show that the trigger is firing and changing the value of the picklist, but no Workflow Rules are evaluated.
The workflow rule is pretty simple, so I don't really understand what's preventing it. The details of the rule are:
Operates on a custom object.
Evaluation Criteria: When a record is created, or when a record is edited and did not previously meet the rule criteria
Rule Criteria: ISPICKVAL(Status__c, 'Not Started')
Active: Yes
Immediate Workflow Actions: an email alert
Edit: the rule does fire if I manually update the object to set the appropriate status, but it isn't firing when a trigger changes the status.
Edit: Did something change on Salesforce around December 4th 2009? That seems to be when this stopped working ...
Any ideas?
If you had said "the trigger does not fire the workflow, even though a manual change via the UI does", I would have responded something like...
Absolutely. That's how it is designed. Salesforce does not allow anything automated to invoke anything automated (i.e. you cannot start a WF from a trigger or another WF).
Given that you say this stopped working only earlier this month, I am frankly astonished! We wanted to achieve something like this about 10 months ago, and Salesforce told us it could not be done; they like to keep tight control over processes that could potentially run away and consume a lot of CPU (because of the multi-tenanted nature of the offering), hence the stringent governor limits...
This may have changed recently, of course; we built workarounds to get around the restriction...
To answer my own question ... I eventually found out what this was.
The Salesforce Spring '09 Workflow Rule and Roll-Up Summary Field Evaluations update was rolled out to all orgs at the start of Dec '09, and changed certain Workflow behaviours.
The update improves the accuracy of your data and prevents the reevaluation of workflow rules in the event of a recursion.
Our particular problem was that we needed Workflow to be evaluated twice on a single object after the initial action - we had a series of changes to a status field that needed to kick off different things. After the Spring '09 update, Workflow is only evaluated once for an action on an object.
So, it did work, but then the platform changed, and it didn't work anymore. Time to write some code.
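For anyone hitting the same change: one workaround is to skip the workflow for this step and send the alert from a trigger instead. The sketch below is hypothetical - the object name Custom_Object__c, the trigger name, and the recipient address are placeholders; only Status__c and the 'Not Started' value come from the rule above:

    trigger NotifyOnNotStarted on Custom_Object__c (after update) {
        List<Messaging.SingleEmailMessage> mails = new List<Messaging.SingleEmailMessage>();
        for (Custom_Object__c rec : Trigger.new) {
            Custom_Object__c oldRec = (Custom_Object__c) Trigger.oldMap.get(rec.Id);
            // Send only when the status has just changed to 'Not Started'
            if (rec.Status__c == 'Not Started' && oldRec.Status__c != 'Not Started') {
                Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
                mail.setToAddresses(new String[] { 'someone@example.com' });
                mail.setSubject('Status changed to Not Started: ' + rec.Id);
                mail.setPlainTextBody('Record ' + rec.Id + ' is now Not Started.');
                mails.add(mail);
            }
        }
        if (!mails.isEmpty()) {
            Messaging.sendEmail(mails);
        }
    }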