We have a (Spacewalk) RHN Satellite server running and several hundred machines registered with it using a few activation keys. Each of these keys has one or more child channels. We need to know how to 'refresh' the registration so that the clients pick up any new child channels that we add. We would like to write a script that checks the 'currentness' of its registration automatically on a regular basis, but we can't work out how to do it without re-registering the machine, which would leave too many defunct profiles on the server side. Any suggestions or help would be appreciated.
The activation key is only used when the system is registered. If you want to change the child channels for a system or systems, you can manage them via the SSM in the UI, or via the API.
The only way to use an activation key to manage channels after registration is to re-register the system with the same activation key.
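If the goal is a client-side script, the spacewalk-channel (or rhn-channel) client tool can list and change a system's child channel subscriptions without re-registering. A rough sketch, assuming the tool is installed on the clients and the script has a Satellite user/password to authenticate with (the channel label and credentials below are placeholders, and exact flags can vary between versions):

    # Child channels the system is currently subscribed to
    spacewalk-channel --list

    # All channels the system is entitled to; compare with the list above
    spacewalk-channel --available-channels --user satuser --password secret

    # Subscribe to a newly added child channel without re-registering
    spacewalk-channel --add --channel new-child-channel-label --user satuser --password secret

From the server side, the XML-RPC API offers equivalent calls (e.g. system.listChildChannels / system.setChildChannels) for doing the same thing in bulk.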
This is a question related to designing command handling with Axon 4.
Let's say I have a domain that models the concept of a Payment.
The actual payment will be done by an external Partner. I want to track it in my system via the following events: a Payment Request Was Issued followed by either
Partner Agreed the Payment or Partner Declined the Payment.
Every event issued by the command should be enlisted in the same database transaction.
What would be the best practice to actually call my partner in Axon 4?
Here's what I've done so far:
Have one command named RequestPaymentCommand
This command will be handled by a Payment Aggregate like this:
do some checks
apply the event PaymentRequestWasIssued
and then call the external partner and, depending on the result, apply either PaymentAccepted or PaymentRefused
In this answer from Stack Overflow, it is said that
All the data that you need to apply the event should normally be available in the command
With this statement in mind, I understand that I should create as many commands as events? But in that case, what is the point of all these commands? Should I end up with something like:
My command RequestPaymentCommand will generate the PaymentRequestWasIssued event.
Then, from somewhere, I call my partner and send another command (how should I name it?) that will generate the event corresponding to the result from the partner?
The actual payment will be done by an external Partner
This means that your application is not the source of truth and it should not try to behave like one. It should only observe what is happening in the remote system and possibly react to remote events. To "observe" could mean to duplicate/copy the remote events into local databases, without modification, just for caching or display reasons. Your system should not give these events any interpretation other than the one given by their source.
After the remote events are copied locally, your system can react to them. This could mean that a Saga, after it receives Partner Agreed the Payment, sends an UnlockFeature command to a local Aggregate (see DDD).
With this statement in mind, I understand that I should create as many commands as events? But in that case, what is the point of all these commands?
This is an indication that those are not your events: you should not emit them from your code; in the worst case you store them and react to them (in a Saga/Process manager). This means that you should discover the local business processes and model them as such: they react to events by sending commands.
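As a rough illustration only (the class, event, and command names below are placeholders, not anything defined in the question), an Axon 4 Saga that reacts to the copied remote event and sends a local command could be sketched like this:

    import org.axonframework.commandhandling.gateway.CommandGateway;
    import org.axonframework.modelling.saga.EndSaga;
    import org.axonframework.modelling.saga.SagaEventHandler;
    import org.axonframework.modelling.saga.StartSaga;
    import org.axonframework.spring.stereotype.Saga;
    import org.springframework.beans.factory.annotation.Autowired;

    @Saga
    public class PaymentSaga {

        // transient: sagas are serialized between events, the gateway is re-injected
        @Autowired
        private transient CommandGateway commandGateway;

        @StartSaga
        @SagaEventHandler(associationProperty = "paymentId")
        public void on(PaymentRequestWasIssued event) {
            // The request is recorded locally; the call to the partner happens
            // outside the aggregate (for example from here, or from a dedicated service).
        }

        @EndSaga
        @SagaEventHandler(associationProperty = "paymentId")
        public void on(PartnerAgreedThePayment event) {
            // React to the (copied) remote outcome by sending a local command.
            commandGateway.send(new UnlockFeatureCommand(event.getPaymentId()));
        }
    }

The event and command classes are assumed to exist elsewhere and to carry a paymentId property, which is what the saga uses for association.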
I'm a casual and mostly inexperienced LSF user, so please bear with me...
I develop software in a corporate setting that submits jobs to LSF for processing. We have a set of machines that we want to use for a specific application but not open up to the public at large for any other usage. There is something in place now that allows a few specific users to use the machines. But we also want any user to be able to use them IF they are running a certain application (a shell script that runs a Perl script, in this case).
I suppose registering the application(s) would be one approach. Another might be to pass a secret/encrypted token or key. Or maybe there are other mechanisms for this.
Is there an LSF-based solution for this?
Thanks
There are a couple of LSF features that can help here. A queue or application profile can have dedicated hosts and users (the HOSTS and USERS parameters).
Queues can have a job starter to check and reject invalid job commands.
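For example, a dedicated queue in lsb.queues could look roughly like this (the queue name, host names, and job starter path are placeholders); the job starter script inspects the submitted command and refuses anything other than the approved application wrapper:

    Begin Queue
    QUEUE_NAME   = approved_app
    HOSTS        = host01 host02 host03
    USERS        = all
    JOB_STARTER  = /shared/bin/run_approved_app.sh
    DESCRIPTION  = Dedicated hosts, only usable through the approved wrapper
    End Queue

After editing lsb.queues, badmin reconfig picks up the change.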
me again..
I have done all of the Sensu-Uchiwa-Graphite setup, and now I have a new request :(. Rather than going to the Sensu server to change thresholds in the check.json files, is there any plugin for Uchiwa so that this adjustment can be made from the Uchiwa dashboard? I ask because my application teams want to be able to change thresholds themselves without accessing the server.
I think sensu-admin in the Enterprise version would do it, but we would need to pay a lot of money per year ;(...
Thanks in advance for your help.
Sumana W.
This is fairly doable if you use a configuration management system like Chef/Ansible/Puppet - especially if you run standalone checks on the sensu-client.
This allows the clients to define their own thresholds, rather than changing the sensu servers themselves.
See https://sensuapp.org/docs/latest/reference/checks.html#standalone-checks
In this case, the check definitions live on the client servers, and the clients control their own thresholds and configuration. The client itself decides how often to run the check and sends the output back to the server, rather than the server requesting the checks. This helps quite a bit with scaling and multi-tenancy.
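For instance, a standalone check definition living on the client itself (say /etc/sensu/conf.d/check_cpu.json; the file name, plugin, and thresholds are only examples) might look like:

    {
      "checks": {
        "cpu": {
          "command": "check-cpu.rb -w 80 -c 90",
          "standalone": true,
          "interval": 60
        }
      }
    }

The application team can edit that file and restart sensu-client on their own box, and the server never needs to change.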
The other way to accomplish this, if you are tied to server-side checks, would be to use client attributes (https://sensuapp.org/docs/0.25/reference/checks.html#check-token-substitution).
For example, you can have a CPU check that says something like check-cpu.sh -w :::cpu_warn::: -c :::cpu_critical:::, where the values come from cpu_warn and cpu_critical attributes in the client.json on the client server.
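Roughly, with the same attribute names as above (the defaults after the | are optional fall-backs used when a client does not define the attribute), the two pieces would look like this.

Check definition on the Sensu server:

    {
      "checks": {
        "cpu": {
          "command": "check-cpu.sh -w :::cpu_warn|80::: -c :::cpu_critical|90:::",
          "subscribers": ["base"],
          "interval": 60
        }
      }
    }

client.json on the client server:

    {
      "client": {
        "name": "app01",
        "address": "10.0.0.11",
        "subscriptions": ["base"],
        "cpu_warn": 75,
        "cpu_critical": 95
      }
    }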
Source: We use sensu extensively in an enterprise environment across thousands of hosts and have been working through these same issues.
We are using Perforce in my company and rely on it heavily. I need some suggestions for the following scenario:
Our Depot structure is something like this:
//depot
    /product1
        /component1
        /component2
        .
        .
        /componentN
            /*.java
            /*.xml
    /product2
        /component1
        /component2
        .
        .
        /componentN
            /*.java
            /*.xml
Every product has multiple components, and every component consists of Java, XML, or some other program files. Every component has a manager/owner associated with it.
Right now, we have blocked write permission for every user, and only when a change is approved by the manager/owner after code review do we open write permission on the relevant file/folder for that user to check in. This process gets a little untidy because the manager/developer has to wait for a Perforce admin to grant the permission (update the Perforce protections table). Also, we give them a window of only 24 hours to check in (due to agile, which I don't understand much :)), after which we are supposed to block write access for that user again.
What I am looking for is a mechanism whereby Perforce admins can delegate this responsibility to the respective managers/owners without giving them super or admin access, and which automatically disables the write permission after 24 hours.
Any suggestions?
Thanks in advance.
There's nothing to do this out of the box, per se.
The closest thing I can think of is to have the mainline version of these components permissioned via a group with an owner. The owner of the group is allowed to add and remove members from the group, thus delegating the permissioning to the "gatekeeper" rather than the admins themselves.
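A sketch of what that could look like (group, user, and path names are invented): the component owner goes in the Owners: field of the group spec, which lets them maintain the Users: list themselves with p4 group, while the protections table only ever references the group:

    Group:  product1-component1-writers
    Owners:
        component1_owner
    Users:
        dev_alice
        dev_bob

and a single line in the protections table (p4 protect), set up once by the admin:

    write group product1-component1-writers * //depot/product1/component1/...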
Let me know if you require further clarification about this.
One common solution is to build a simple tool which reads and writes the protections table, the group memberships, etc., to implement the policies that you desire.
The protections and groups data are not complex in format, and you can easily write a little bit of text-processing code that writes and re-writes these specs according to your needs.
Then install your tool on the server machine in a secure fashion, granting the tool the rights to update the protections table, and have your component administrators use the tool to manage the permissions.
For example, I've seen this done by writing a small web application, in Java or Perl for example, installing that on a web server on a secure machine, and letting the component admins operate that tool through a web interface.
All your tool has to provide is (a) a simple login/logout mechanism for your component admins (the web server may already do this for you), (b) a command that takes a user name and a folder name and grants permission, and (c) a command (or a timer) that removes that permission subsequently.
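Under the hood, such a tool usually just wraps the spec commands; a rough outline of the grant/revoke cycle (the user name, path, and file name are placeholders):

    # Grant: dump the current table, append a line, load it back
    p4 protect -o > protections.txt
    echo "    write user jsmith * //depot/product1/component1/..." >> protections.txt
    p4 protect -i < protections.txt

    # Revoke (e.g. from a scheduled job 24 hours later): dump the table again,
    # strip that same line out, and load the result back with p4 protect -i

Since Protections: is the last field of the spec, appending an indented line at the end is enough for the grant case; the revoke pass needs the small bit of text processing mentioned above.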
I want to write a workflow application that routes a link to a document. The routing is based upon machines not users because I don't know who will ever be at a given post. For example, I have a form. It is initially filled out in location A. I now want it to go to location B and have them fill out the rest. Finally, it goes to location C where a supervisor will approve it.
None of these locations has a known user; I don't know who it will be. I only know that whoever it is is authorized (they are assigned to the workstation and are approved to be there).
Will Microsoft Windows Workflow do this, or do I need to build my own workflow based on SQL Server, IP addresses, and so forth?
Also, how would the user at a workstation be notified that a document had been sent to their machine?
Thanks for any help.
I think if I were approaching this problem, Windows Workflow would do it. What you want is a state machine that has three states:
A Start
B Completing
C Approving
However, Workflow needs to run in one central place (trust me on this: you only want one workflow runtime running at once, otherwise the same bit of work can be done multiple times; see our questions on the MSDN forum). So a central server running the workflow is the answer.
How you present this to the users can be done in multiple ways. Dave suggested using an ASP.NET site to identify the machines that are doing the work, which is probably how I would do it. However, you could also write a Windows Forms client that would do the same thing. This would require using something like SOAP / WCF to facilitate communication between the client form applications and the central workflow service. It would have the advantage that you could use a system tray icon to alert the user.
You might also want to look at human workflow engines, as they are designed to do things such as this (and more). I'm most familiar with PNMsoft's Sequence.
You can design a generic "routing" workflow that will cause data to go to a workstation. The easiest way to do this would be to embed the workflow in an ASP.NET application. Each workstation should visit the application with a workstation ID in the querystring:
http://myapp/default.aspx?wid=01
When the form is filled out at workstation A, the workflow running in the web app can enter it into the "work bin" of the next workstation. Anyone sitting at the computer for which the form is destined will see it appear in their list of forms to review. You can use AJAX to make it slick and auto-updating.