CQ5 Reverse Replication - AEM

Suppose there are 3-4 publish environments. With reverse replication, if a user adds a comment on one publish environment, say Publish1, will it be reflected on all the other publish environments? Or will it first come to the author and only be reflected after approval?

No: as you say, the content will first come to the Author instance. The process is:
User posts a comment on Publish1
The comment goes into the Publish1 outbox
The Author polls all Publish instances to see if new comments are present
The Author retrieves the new comment from Publish1 and puts it in the moderation queue
Once the comment is activated, replication agents on the Author push the content back out to every Publish instance
As far as I remember, the comment isn't truly published even on the Publish1 environment until it has been through the process above, though the user who posted the comment will be able to see it thanks to their session data. If you cleared the session, or accessed Publish1 directly through another browser for instance, you wouldn't see the comment.
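If it helps to picture the data flow, here is a purely illustrative model of that outbox/poll/redistribute pattern, as a small Python sketch. This is not AEM code and all names are made up; it just mimics the sequence of steps above:

from collections import deque

class Publish:
    def __init__(self, name):
        self.name = name
        self.outbox = deque()   # reverse-replication outbox
        self.visible = []       # what visitors to this instance can see

    def post_comment(self, comment):
        self.outbox.append(comment)   # queued for the author, not yet visible

class Author:
    def __init__(self, publishers):
        self.publishers = publishers
        self.moderation_queue = deque()

    def poll_outboxes(self):
        # the author's replication listeners poll every publish outbox
        for p in self.publishers:
            while p.outbox:
                self.moderation_queue.append(p.outbox.popleft())

    def activate_all(self):
        # once moderated/activated, push the content to every publish instance
        while self.moderation_queue:
            comment = self.moderation_queue.popleft()
            for p in self.publishers:
                p.visible.append(comment)

pubs = [Publish(f"Publish{i}") for i in range(1, 5)]
author = Author(pubs)
pubs[0].post_comment("Great article!")
author.poll_outboxes()
author.activate_all()
print([len(p.visible) for p in pubs])   # every instance now shows the comment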
According to the Adobe documentation:
Features such as comments and forms allow users to enter information on a publish instance. For this, a type of replication is needed to return this information to the author environment, from where it is redistributed to other publish environments. However, due to security considerations, any traffic from the publish to the author environment must be strictly controlled.
This is known as reverse replication and functions using an agent in the publish environment which references the author environment. This agent places the input into an outbox. This outbox is matched with replication listeners in the author environment. The listeners poll the outboxes to collect any input made and then distribute it as necessary. This ensures that the author environment controls all traffic.
You should try it out: even without dedicated servers, you can check this by starting up a couple of copies of the Quickstart JAR on different port numbers and configuring replication agents between them to test replication across them.
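For example, assuming the usual CQ5 convention that the Quickstart filename sets the run mode and port (adjust the JAR name to match your version):

java -jar cq5-author-p4502.jar
java -jar cq5-publish-p4503.jar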

Related

How to subscribe a Slack channel to an individual GitHub issue

I don't want to subscribe a Slack channel to all issues on arbitrary third-party repositories, just to the particular issues in which my team/organization is involved (contributing to / impacted by), so the usual GitHub integration command /github subscribe thirdparty/arbitraryrepo issues does not suffice: it would cause a ton of unwanted noise in the channel (and the existing label filtering would not cut it).
(Update: there is an open feature request for that https://github.com/integrations/slack/issues/1280)
Nor do I want to forward my personal GitHub subscriptions to the Slack channel, as there are lots of projects I am individually involved in (e.g. via my direct mentions) that have nothing to do with my team's work.
Also, the subscription should survive my leaving the team/company.
A per-issue public RSS/Atom feed would cut it, but no such thing exists.
Am I missing something obvious?
The only workaround I can think of is adding a channel email integration, registering that address to an organization-shared GitHub account, and subscribing that account to the individual issues we run into.
But this is quite cumbersome (you must keep an alternate GitHub logged-in session while browsing, anyone who can read the channel could abuse it to hijack the company account, subscriptions can't be managed from the Slack channel...), so any different idea would be very welcome.
As far as I know there is nothing that would suit your need completely at the moment, but have you considered labels?
Option 1: You could use filtering based on a label, which seems to be the closest fit. It would notify you about all issues, pull requests, commits, releases, and deployments that have a given label. Syntax example:
/github subscribe repo-owner/repo-name +label:"team-a"
Pros: a general filter based on a label name, e.g. team-a
Cons:
only a single label filter is supported (AFAIK multiple labels aren't supported, though it is being discussed: https://github.com/integrations/slack/issues/384)
it only works for a label applied when the issue/PR is created, not for labels added later ("label changed" events are not triggered). This is a known bug (https://github.com/integrations/slack/issues/1594), and a feature request is open: https://github.com/integrations/slack/issues/965
Note: a workaround to pick up a label change on a PR is to convert the PR to a draft and back (https://github.com/integrations/slack/issues/965#issuecomment-1330884166)
Option 2: Another thing to consider is that the account used within the GitHub Slack app to connect to GitHub will itself also be notified about all of its mentions, assignments, and reviews. So if you don't use your personal account, notifications relevant to the whole team (or to another specific user, etc.) can be generated for you.
EDIT:
Workaround steps to partially achieve what you want (though they may conflict with Slack/GitHub settings I'm not aware of):
Use or create a shared account (or other special account) in GitHub.
In Slack, use this special GitHub account to connect the Slack GitHub app to GitHub (this enables notifications for that account's mentions and assignments by default).
Then, whenever this GitHub account is assigned or mentioned, you'd automatically get notified (though again, this would cover not just issues but hypothetically also PRs etc.) in the dedicated channel where you have your GitHub integration set up.
To auto-assign an issue, you'd have to use e.g. GitHub Actions.
Caveat: as we discussed, there's no simple way to achieve your goal completely. To get at least close to what you describe and require, you'd need to accept some compromises and extra workaround steps, as it's unfortunately not natively supported at the moment. A third, hypothetical way is to build an even more complex mechanism for filtering and redirecting the data yourself, which would add complexity not just to introduce but also to maintain (unless you already have something similar in your infrastructure), so I wouldn't recommend it.
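For completeness, though, here is a minimal sketch of that third way: a Python script that polls the GitHub REST API for new comments on the specific issues you care about and forwards them to a Slack incoming webhook. The token, webhook URL, and issue list below are hypothetical placeholders:

import time
import requests

GITHUB_TOKEN = "ghp_yourtoken"                                     # hypothetical
SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"  # hypothetical
ISSUES = ["thirdparty/arbitraryrepo/issues/123"]                   # issues to watch

def new_comments(issue, since):
    # GET /repos/{owner}/{repo}/issues/{number}/comments, filtered by "since"
    repo, _, number = issue.rpartition("/issues/")
    url = f"https://api.github.com/repos/{repo}/issues/{number}/comments"
    resp = requests.get(url, params={"since": since},
                        headers={"Authorization": f"token {GITHUB_TOKEN}"})
    resp.raise_for_status()
    return resp.json()

def main():
    # naive in-memory cursor; persist it and de-duplicate in real life
    since = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
    while True:
        time.sleep(300)   # poll every five minutes
        for issue in ISSUES:
            for c in new_comments(issue, since):
                text = f"{c['user']['login']} commented on {issue}: {c['body']}"
                requests.post(SLACK_WEBHOOK, json={"text": text})
        since = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())

if __name__ == "__main__":
    main()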

CRM 2011 RU13: The workflow cannot be published or unpublished by someone who is not its owner

I have created and added some workflows to CRM 2011 RU13 through the UI.
Through no fault of my own, my development environment is completely air-gapped from my production environment.
I added these workflows to my solution, exported the solution as managed, and gave the solution to the production admin.
When he deploys it, the import fails with this message:
The workflow cannot be published or unpublished by someone who is not its owner
How do I fix this? There is no way to not give workflows an owner, or to say that the owner is the solution.
The production admin gets that message because he is not the owner (inside the target CRM environment) of one or more active workflows included in your solution.
This happens in these situations:
The first time your solution is imported, USER_A performs the operation and all the workflows are automatically assigned to him. If USER_B later tries to import an updated version of the solution, he gets the error message because he is not the owner of the workflow(s).
The first time your solution is imported, USER_A performs the operation and all the workflows are automatically assigned to him. Meanwhile, one or more workflows are reassigned to USER_C. If USER_A later tries to import an updated version of the solution, he gets the error message because he is no longer the owner of the workflow(s).
Before a workflow can be updated it must first be deactivated, and only the owner can deactivate a workflow. This is by design.
In your case, either the production admin must become the owner of the processes (he can temporarily assign the workflows to himself, import the solution, and afterwards assign them back to the right user), or the owner of the workflows needs to be the one who imports the solution (if he has the rights to do so).
A couple of additional points for clarity for the OP:
The owner of the workflows in your dev environment is not relevant; the importing user in prod will become the owner (this does not contradict Guido, I'm just making sure you don't follow a red herring). It is quite right for there to be an "air gap" between dev and prod.
If you know which workflows are in your solution, assign those in prod to yourself, then import; then, if and only if you need to, reassign them to the original owner(s).
You may not need to if that owner is just an equivalent system admin user, but if it is a special user (e.g. "Workflow daemon", so users can see why it updated their records) you will want to re-assign.
Note that after re-assigning them, that user has to activate the workflows. You cannot activate a workflow in someone else's name (or users could write workflows that run as admins and elevate their privileges).
If the workflows have not actually been changed in this version of your solution, take them out of the solution and ignore them. Often I find that a workflow has been written, carried across to production in the original "go live", and is working perfectly fine, but is left in the solution which is constantly updated and re-published (i.e. exported/imported).
Personally, I often have a "go live" solution (or more than one, but that's a different thread...) and then start all over again with a new solution which contains only incremental changes thereafter. This means that all your working workflows, plugins, web resources, etc. do not appear in that solution, which avoids confusion as to versions, reduces solution bloat, and avoids this problem of workflow ownership. If a workflow is actually updated, then you need to deal with the import, but don't make this a daily occurrence for unrelated changes.

mqsvc.exe pegs CPU at full usage when deploying NServiceBus to production

When I deployed my site that uses NServiceBus to a new production box, it was unusably slow...
After some debugging I discovered that mqsvc.exe was taking up 50% of the CPU and the other 50% was being taken up by w3wp.exe.
I found this post here:
http://geekswithblogs.net/michaelstephenson/archive/2010/05/07/139717.aspx
which recommended the following:
Make sure the Windows service for the NServiceBus Generic Host is set to run with the right credentials
Make sure you have the queue set with the right permissions
Make sure you turn on the right logging configuration in NServiceBus
So I figured the issue was something related to permissions, but even after trying to set the permissions correctly (or so I thought), I still wasn't able to resolve the issue.
If you allow NServiceBus to create its own queues, then it will create them with the correct permissions it needs.
The problem comes in when you set up a web application, the queues are created, and then the identity the application runs under changes. Then you get exactly this problem: NServiceBus tries to check the queue for a message, does not have access to do so, immediately retries over and over, and you spike the processor.
The fix: Delete the queue. Restart the web application. NServiceBus takes over.
Edit: As noted in the comments, NServiceBus 3.x doesn't invoke the installers by default, which means queues are not automatically created in production unless you ask it to. See the documentation page on Installers for more detail.
For a web application (or any other situation where you're not using NServiceBus.Host) you can invoke the installers as part of the fluent config. There is a full example in the NServiceBus download, but here is a link to the relevant file on GitHub.
The issue did end up being that the website needed to be granted explicit permissions to the queues.
I found a number of resources online telling me this, but I still had to spend a good amount of time monkeying around with exactly WHICH account needed access... it turned out that since my application pools were set to run as ApplicationPoolIdentity, I needed to grant permissions on the NServiceBus queue to the following account:
IIS AppPool\{APP POOL NAME}
I granted full access rights, though I'm sure you could refine that a bit if you needed to.
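If you'd rather script the grant than click through Computer Management, here is a hedged sketch using pythonnet to drive the .NET System.Messaging API from Python (assumes Windows with MSMQ installed and pip install pythonnet; the queue path and app pool name are made up):

import clr
clr.AddReference("System.Messaging")
from System.Messaging import MessageQueue, MessageQueueAccessRights

# open the private queue and give the app pool identity full control
queue = MessageQueue(r".\private$\myapp.endpoint")
queue.SetPermissions(r"IIS AppPool\MyAppPool", MessageQueueAccessRights.FullControl)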
Hopefully, this will help anyone who runs into the same issues.
(This is my first attempt at the "Answer your own question" mechanism so please let me know if I am doing something wrong..)

Perforce: controlling permissions without involving super user access

We are using Perforce in my company and rely on it heavily. I need some suggestions for the following scenario:
Our Depot structure is something like this:
//depot
/product1
/component1
/component2
.
.
/componentN
/*.java
/*.xml
/product2
/component1
/component2
.
.
/componentN
/*.java
/*.xml
Every product has multiple components, and every component consists of Java, XML, or some other program files. Every component has a manager/owner associated with it.
Right now, we have blocked write permissions for every user; only once a change is approved by the manager/owner after code review do we open write permission for that user on the relevant file/folder so they can check in. This process becomes a little untidy because the manager/developer has to wait for the Perforce admin to allow permissions (update the Perforce protections table). Also, we give them a window of only 24 hours to check in (due to agile, which I don't understand much :)), after which we are supposed to block the write access again for that user.
What I am looking for is a mechanism by which Perforce admins can delegate this responsibility to the respective managers/owners without giving them super user or admin access, and which automatically disables the write permission after 24 hours.
Any suggestions?
Thanks in advance.
There's nothing to do this out of the box, per se.
The closest thing I can think of is to permission the mainline version of these components via a group that has an owner. The owner of the group is allowed to add and remove members from the group, thus delegating the permissioning to the "gatekeeper" rather than to the admins themselves.
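For example (all names here are made up), the admins would create the group once, make the component manager its owner, and reference the group from the protections table:

Group:   component1-writers
Owners:  manager_alice
Users:
        dev_bob

write group component1-writers * //depot/product1/component1/...

The owner can then maintain the membership herself with p4 group -a component1-writers; the -a flag lets a group owner modify the group without needing super access. This doesn't solve the automatic 24-hour expiry, though, which would still need a script such as the tool described in the next answer.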
Let me know if you require further clarification about this.
One common solution is to build a simple tool which reads and writes the protections table, the group memberships, etc., to implement the policies that you desire.
The protections and groups data are not complex in format, and you can easily write a little bit of text-processing code that writes and re-writes these specs according to your needs.
Then install your tool on the server machine in a secure fashion, granting the tool the rights to update the protections table, and have your component administrators use the tool to manage the permissions.
For example, I've seen this done by writing a small web application, in Java or Perl for example, installing that on a web server on a secure machine, and letting the component admins operate that tool through a web interface.
All your tool has to provide is (a) a simple login/logout mechanism for your component admins (the web server may already do this for you), (b) a command that takes a user name and a folder name and grants permission, and (c) a command (or a timer) that removes that permission subsequently.
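A minimal sketch of the core of such a tool, in Python (the user names, depot paths, and policy here are made up; it rewrites the protections table via the p4 CLI, and the 24-hour expiry would come from scheduling revoke() via cron or Task Scheduler):

import subprocess
import sys

def read_protections():
    # fetch the current protections spec as text
    spec = subprocess.run(["p4", "protect", "-o"],
                          capture_output=True, text=True, check=True).stdout
    return spec.splitlines()

def write_protections(lines):
    # feed the edited spec back to the server
    subprocess.run(["p4", "protect", "-i"],
                   input="\n".join(lines) + "\n", text=True, check=True)

def entry(user, path):
    # protections lines are tab-indented under the Protections: field
    return f"\twrite user {user} * {path}"

def grant(user, path):
    lines = read_protections()
    if entry(user, path) not in lines:
        lines.append(entry(user, path))   # later lines win in Perforce
        write_protections(lines)

def revoke(user, path):
    lines = [l for l in read_protections() if l != entry(user, path)]
    write_protections(lines)

if __name__ == "__main__":
    # e.g.: tool.py grant jdoe //depot/product1/component1/...
    action, user, path = sys.argv[1], sys.argv[2], sys.argv[3]
    {"grant": grant, "revoke": revoke}[action](user, path)

The tool itself must run as a user with super access, which is exactly why you install it on the server in a secure fashion and let the component admins reach it only through its own login.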

Can Microsoft Windows Workflow route to specific workstations?

I want to write a workflow application that routes a link to a document. The routing is based upon machines not users because I don't know who will ever be at a given post. For example, I have a form. It is initially filled out in location A. I now want it to go to location B and have them fill out the rest. Finally, it goes to location C where a supervisor will approve it.
None of these locations has a known user; that is, I don't know who it will be. I only know that whoever it is is authorized (they are assigned to the workstation and are approved to be there).
Will Microsoft Windows Workflow do this or do I need to build my own workflow based on SQL Server, IP Addresses, and so forth?
Also, how would the user at a workstation be notified that a document had been sent to their machine?
Thanks for any help.
I think if I were approaching this problem, Windows Workflow would work for it. What you want is a state machine with three states:
A Start
B Completing
C Approving
However, the workflow needs to run in one central place (trust me on this: you only want one workflow runtime running at once, otherwise the same bit of work can be done multiple times; see our questions on the MSDN forum). So a central server running the workflow is the answer.
How you present this to the users can be done in multiple ways. Dave suggested using an ASP.NET site to identify the machines that are doing the work, which is probably how I would do it. However, you could also write a Windows Forms client that does the same thing. This would require something like SOAP/WCF to facilitate communication between the client form applications and the central workflow service, but it would have the advantage that you could use a system tray icon to alert the user.
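To make the state model concrete, here is a toy sketch in plain Python (not Workflow Foundation code; the states match the list above, and the workstation IDs are made up):

# Toy model of routing a form through three states, each owned by a workstation.
STATES = ["Start", "Completing", "Approving", "Done"]
ROUTE = {"Start": "WORKSTATION-A",       # location A fills out the form
         "Completing": "WORKSTATION-B",  # location B completes it
         "Approving": "WORKSTATION-C"}   # location C approves it

# The central server (the single workflow runtime) owns every work bin.
work_bins = {ws: [] for ws in ROUTE.values()}

class Form:
    def __init__(self, form_id):
        self.form_id = form_id
        self.state = "Start"

def route(form):
    """Drop the form into the bin of whichever workstation owns its state."""
    work_bins[ROUTE[form.state]].append(form)

def submit(form):
    """A workstation finished its step: advance the state and re-route."""
    work_bins[ROUTE[form.state]].remove(form)
    form.state = STATES[STATES.index(form.state) + 1]
    if form.state in ROUTE:
        route(form)

f = Form(1)
route(f)      # appears in WORKSTATION-A's bin
submit(f)     # now waiting at WORKSTATION-B
submit(f)     # now waiting at WORKSTATION-C for approval
submit(f)     # Done: no longer in any bin

The point of the sketch is that the clients never talk to each other; they only ask the central server "what is in my bin?", which is exactly what the ASP.NET or WCF front end would expose.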
You might also want to look at human workflow engines, as they are designed to do things such as this (and more). I'm most familiar with PNMsoft's Sequence.
You can design a generic "routing" workflow that will cause data to go to a workstation. The easiest way to do this would be to embed the workflow in an ASP.NET application. Each workstation should visit the application with a workstation ID in the querystring:
http://myapp/default.aspx?wid=01
When the form is filled out at workstation A, the workflow running in the web app can enter it into the "work bin" of the next workstation. Anyone sitting at the computer for which the form is destined will see it appear in their list of forms to review. You can use AJAX to make it slick and auto-updating.