PowerShell - what verb to use for a processing cmdlet?

Trying to find a standard.
The cmdlet will process data - multiple inputs, defined by parameters - into an output. Processing takes anywhere from a short time up to, typically, 5 to 15 minutes, while the system goes through a lot of data and analyses it.
"Execute" gets me a warning, but none of the "common verbs" I found seems appropriate. There are so many - New, Open, and so on - but no "Process", "Execute", or "Analyse".
Is there a specific standard verb I have overlooked?

Based on the information you provided, I would suggest Invoke. But you can find some useful discussion of Cmdlet Verbs in these links:
Cmdlet Verbs on MSDN
PowerShell: Approved Verbs (through v3.0)
Some key excerpts from the first link:
Invoke - Performs an action, such as running a command or a method.
Invoke vs. Start
The Invoke verb is used to perform an operation that is generally a synchronous operation, such as running a command. The Start verb is used to begin an operation that is generally an asynchronous operation, such as starting a process.

For a list of approved verbs, use the Get-Verb cmdlet. I often find this useful if I want to find an appropriate verb without schlepping to MSDN or Google (or Bing, or DuckDuckGo).
PS> Get-Verb
Verb Group
---- -----
Add Common
Clear Common
Close Common
Copy Common
Enter Common
Exit Common
Find Common
Format Common
Get Common
Hide Common
Join Common
Lock Common
Move Common
New Common
Open Common
Pop Common
Push Common
Redo Common
Remove Common
Rename Common
Reset Common
Search Common
Select Common
Set Common
Show Common
Skip Common
Split Common
Step Common
Switch Common
Undo Common
Unlock Common
Watch Common
Backup Data
Checkpoint Data
Compare Data
Compress Data
Convert Data
ConvertFrom Data
ConvertTo Data
Dismount Data
Edit Data
Expand Data
Export Data
Group Data
Import Data
Initialize Data
Limit Data
Merge Data
Mount Data
Out Data
Publish Data
Restore Data
Save Data
Sync Data
Unpublish Data
Update Data
Approve Lifecycle
Assert Lifecycle
Complete Lifecycle
Confirm Lifecycle
Deny Lifecycle
Disable Lifecycle
Enable Lifecycle
Install Lifecycle
Invoke Lifecycle
Register Lifecycle
Request Lifecycle
Restart Lifecycle
Resume Lifecycle
Start Lifecycle
Stop Lifecycle
Submit Lifecycle
Suspend Lifecycle
Uninstall Lifecycle
Unregister Lifecycle
Wait Lifecycle
Debug Diagnostic
Measure Diagnostic
Ping Diagnostic
Repair Diagnostic
Resolve Diagnostic
Test Diagnostic
Trace Diagnostic
Connect Communications
Disconnect Communications
Read Communications
Receive Communications
Send Communications
Write Communications
Block Security
Grant Security
Protect Security
Revoke Security
Unblock Security
Unprotect Security
Use Other
PS>
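With a verb chosen, the rest is naming and plumbing. As a minimal sketch (Invoke-DataAnalysis and its parameters are hypothetical names - adjust the noun and parameters to your own domain):
function Invoke-DataAnalysis {
    [CmdletBinding()]
    param(
        # Hypothetical parameters describing the data to analyse
        [Parameter(Mandatory, ValueFromPipeline)]
        [object[]]$InputObject,

        [string]$Mode = 'Full'
    )
    process {
        foreach ($item in $InputObject) {
            # The long-running analysis would happen here; emit one result object per input
            [pscustomobject]@{ Item = $item; Mode = $Mode; Result = $null }
        }
    }
}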

Related

PowerShell Approved Verbs for "Archive" and "Unarchive" of Data Items

I have data that supports being archived and unarchived, but none of the Approved Verbs for PowerShell Commands for data management or resource lifecycle seems to be a good fit.
Technically, the relevant data items are available over a RESTful API and are referenced by ID. The cmdlets I'm building speak to that API.
EDIT: These data items are more accurately described as records, with the act of archiving being some form of recategorisation or relabelling of those records as being in an archived state.
Which verbs are most appropriate and what are some of the implementation factors and considerations that should be taken into account when choosing?
New-DataArchive and Remove-DataArchive
Not sure of the particulars of the underlying API, but often there's a POST (new) and a DELETE (remove).
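A rough sketch of that mapping, assuming a hypothetical archive endpoint (the function bodies, URIs, and parameter names here are illustrative only):
function New-DataArchive {
    param([Parameter(Mandatory)][string]$Id)
    # POST creates the archive for the referenced record (hypothetical endpoint)
    Invoke-RestMethod -Method Post -Uri "https://example.test/records/$Id/archive"
}
function Remove-DataArchive {
    param([Parameter(Mandatory)][string]$Id)
    # DELETE removes the archive again (hypothetical endpoint)
    Invoke-RestMethod -Method Delete -Uri "https://example.test/records/$Id/archive"
}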
I'm also a big fan of adding [Alias]s when there's not a great match. For example, I was recently working in a git domain where Fork is a well-known concept, so I picked the "closest" approved verb, but added an alias to provide clarity (aliases can be whatever you want)
function Copy-GithubProject {
    [Alias("Fork-GithubProject")]  # alias surfaces the familiar "Fork" term
    [CmdletBinding()]
    param()
    # ... implementation ...
}
I think this comes down to user experience (what the person using said cmdlets sees) versus actual implementation. The Approved Verbs for PowerShell Commands article describes verbs mostly in terms of the actual implementation, not so much the user experience of those using the cmdlets. I think choosing PowerShell verbs based on the actual implementation, instead of abstracting that away and focusing on the common-sense user experience, is the way the approved verb list is meant to be used.
Set (s): Replaces data on an existing resource or creates a resource that contains some data...
Get (g): Specifies an action that retrieves a resource. This verb is paired with Set.
Although the user may be archiving something, they may only actually be changing a label or an archive bit on the resource. In my case, the 'archiving' is actually just a flag on a row in a backend database, meaning it is replacing data on an existing resource, so Set-ArchiveState (or equivalent), as Seth suggested, is the most appropriate here.
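A minimal sketch of that shape, assuming a hypothetical REST endpoint that exposes the archive flag (the URI, payload, and property names are made up):
function Set-ArchiveState {
    [CmdletBinding(SupportsShouldProcess)]
    param(
        [Parameter(Mandatory)][string]$Id,
        [Parameter(Mandatory)][bool]$Archived
    )
    if ($PSCmdlet.ShouldProcess($Id, "Set archive state to $Archived")) {
        # Hypothetical endpoint; the real API path and body will differ
        Invoke-RestMethod -Method Patch -Uri "https://example.test/records/$Id" `
            -ContentType 'application/json' `
            -Body (@{ archived = $Archived } | ConvertTo-Json)
    }
}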
New vs. Set
Use the New verb to create a new resource. Use the Set verb to modify an existing resource, optionally creating it if it does not exist, such as the Set-Variable cmdlet.
...
New (n): Creates a resource. (The Set verb can also be used when creating a resource that includes data, such as the Set-Variable cmdlet.)
I think New would only be applicable if you are creating a new resource based off the old ones, with the new resource representing an archived copy. In my use case, archival of a resource is represented by a flag and I am primarily changing data on an existing resource, thus New isn't suitable here.
Publish (pb): Makes a resource available to others. This verb is paired with Unpublish.
Unpublish (ub): Makes a resource unavailable to others. This verb is paired with Publish.
There is an argument to be made that, if Archiving/Unarchiving restricts availability of the resource, Publish/Unpublish would be appropriate but I think this negatively impacts the user-experience even more than Set/Get does by using terminology in an uncommon way.
Compress (cm): Compacts the data of a resource. Pairs with Expand.
Expand (en): Restores the data of a resource that has been compressed to its original state. This verb is paired with Compress.
This is quite implementation specific and I think would only be suitable if the main purpose of the Archive/Unarchive action is for data compression and not for resource lifecycle management.

Creating an atomic process for a netconf edit-config request

I am creating a custom system in which a user's netconf edit-config submission initiates a set of actions in my system that atomically alters the configuration of our system and then sends a notification to the user of its success or failure.
Think of it as a big SQL transaction that, at the end, either commits or rolls back.
So, steps
User submits an edit-config
System accepts config and works to implement this config
If the config is successful, sends back a thumbs-up response (not sure of the formal way of doing this)
If the config is a failure, sends back a thumbs-down response (and I will have to make sure the config is rolled back internally)
All this is done atomically. So, if a user submits two configs in a row, they won't conflict with each other.
Our working idea (probably not the best one) was to accept the edit-config and then, within sysrepo, edit our leafs with the success or failure flags inside the same session as the initial change. We were hoping this would keep everything atomic; if the edits happened outside that session, multiple configuration changes could conflict with each other.
We weren't sure whether to go about this with pure netconf or to leverage sysrepo directly. We noticed all the plugins/bindings made for sysrepo and figured those could be used directly to talk to our datastore.
But that said, our working idea is most likely not a best-practice approach. What would be the best way to achieve this?
Our system is:
netopeer 1.1.27
sysrepo 1.4.58
libyang 1.0.167
libnetconf2 1.1.24
And our yang file is
module rxmbn {
  namespace "urn:com:zug:rxmbn";
  prefix rxmbn;

  container rxmbn-config {
    config true;

    leaf raw {
      type string;
    }
    leaf raw_hashCode {
      type int32;
    }
    leaf odl_last_processed_hashCode {
      type int32;
    }
    leaf processed {
      type boolean;
      default "false";
    }
  }
}
Currently we can:
Execute an edit-config to netopeer server
We can see the new config register in the sysrepo datastore
We can capture the moment sysrepo registers the data via sysrepo's API
But we are having problems with:
Atomically editing the datastore during the update session (due to locks, which is normal; in fact, if there is no way to edit during an update session, that is fine and not necessary - the main goal is the next bullet)
Atomically reacting to the new edit-config and responding to the end user
We are all a bit new to netconf and yang, so I am sure there is some way to leverage the notification API or event API, either through the netopeer session or sysrepo; we just don't know enough yet.
If there are any examples or implementation advice to create an atomic transaction for this, that'd be really useful.
I know nothing of sysrepo so this is from a NETCONF perspective.
NETCONF servers process requests serially within a single session in a request-response fashion, meaning that everything you do within a single NETCONF session should already be "atomic" - you cannot send two requests and have them applied in reverse order or in parallel, no matter what you do. A well-behaved client would also wait for each response from the server before sending a new request, especially if all updates must execute successfully and in a specific order. The protocol also defines no way to cancel a request already sent to a server.
If you need to prevent other sessions from modifying a datastore while one session performs multiple <edit-config> operations, you use the <lock> and <unlock> NETCONF operations to lock the entire datastore. There is also RFC 5717 and partial locking, which would only lock a specific branch of the datastore.
Using notifications to report success of an <edit-config> would be highly unusual - that's what <rpc-reply> and <rpc-error> are there for within the same session. You would use notifications to inform other sessions about what's happening. In fact, there are standard base notifications for config changes.
I suggest reading the entire RFC6241 before proceeding further. There are things like candidate datastores, confirmed-commits, etc. you should know about.
Which component are you developing? Netconf client/manager or Netconf server?
In general, the Netconf server should implement individual Netconf RPC operations in an atomic way.
When a Netconf client wants to perform a set of operations in an atomic way, it should follow the procedure explained in Appendix E.1 of RFC 6241.

HTTP GET for 'background' job creation and acquiring

I'm designing an API for a job scheduler. There is one scheduler with a set of resources and DB tables for them. There are also multiple 'workers' that request 'jobs' from the scheduler. A worker can't create a job; it can only request one. Jobs must be calculated on the server side. A job is also a dynamic entity, calculated using multiple DB tables and the current time. There is no 'job' table.
In general this system is very similar to a task queue, but without the queue. I need a method for a worker to request the next task; that task should be calculated and assigned to that agent.
Is it OK to use the GET verb to retrieve and 'lock' a job for a specific worker?
In terms of resources this query does not modify anything; only internal DB state is updated. To the client it looks like fetching records one by one; it doesn't know about the internal modifications.
In pure REST style I should probably define a job table and a CRUD API for it. Then I would need to create some auxiliary service to POST jobs to that table. Each agent would then list jobs using GET and lock one using PATCH. That approach requires multiple potential retries due to race conditions (a job can already be locked by another agent). It also looks a little complicated if I need to assign a job to a specific agent based on server-side logic; in that case I need to implement some check logic on the client side to iterate through jobs based on the different responses.
This approach looks complicated.
Is it OK to use the GET verb to retrieve and 'lock' a job for a specific worker?
Maybe? But probably not.
The important thing to understand about GET is that it is safe:
The purpose of distinguishing between safe and unsafe methods is to
allow automated retrieval processes (spiders) and cache performance
optimization (pre-fetching) to work without fear of causing harm. In
addition, it allows a user agent to apply appropriate constraints on
the automated use of unsafe methods when processing potentially
untrusted content.
If aggressive cache performance optimization would make a mess in your system, then GET is not the http method you want triggering that behavior.
If you were designing your client interactions around resources, then you would probably have something like a list of jobs assigned to a worker. Reading the current representation of that resource doesn't require that a server change it, so GET is completely appropriate. And of course the server could update that resource for its own reasons at any time.
Requests to modify that resource should not be safe. For instance, if the client is going to signal that some job was completed, that should be done via an unsafe method (POST/PUT/PATCH/DELETE/...)
I don't have such a resource. It's an ephemeral resource spread across the tables. There is no DB table for it and no ID column to update that job. Why I don't have such a table is another question, but it's a current requirement and limitation.
Fair enough, though the main lesson still stands.
Another way of thinking about it is to think about failure. The network is unreliable. In a distributed environment, the client cannot distinguish a lost request from a lost response. All it knows is that it didn't receive an acknowledgement for the request.
When you use GET, you are implicitly telling the client that it is safe (there's that word again) to resend the request. Not only that, but you are also implicitly telling any intermediate components that it is safe to repeat the request.
If there are no adverse effects to handling multiple copies of the same request, then GET is fine. But if processing multiple copies of the same request is expensive, then you should probably be using POST instead.
It's not required that the GET handler be safe -- the standard only describes the semantics of the messages; it doesn't constrain the implementation at all. But any loss of property incurred is properly understood to be the responsibility of the server.
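For contrast, a rough sketch of the POST-based acquisition from the worker's side (the endpoint, payload, and worker id are hypothetical):
# The server calculates, assigns, and locks the next job in a single step
$job = Invoke-RestMethod -Method Post -Uri 'https://scheduler.example.test/job-assignments' `
    -ContentType 'application/json' `
    -Body (@{ workerId = 'worker-42' } | ConvertTo-Json)
# POST is neither safe nor idempotent, so caches and intermediaries will not replay it on their own
$job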

Service which provides an interface to an async service and Idempotency violation

Please keep in mind I have a rudimentary understanding of REST and building services. I am asking this question mostly because I am trying to decouple a service from invoking a CLI (within the same host) by providing a front end to run async jobs in a scalable way.
I want to build a service where you can submit an asynchronous job. The service should be able to tell me status of the job and location of the results.
APIs
1) CreateAsyncJob
Input: JobId, JobFile
Output: 200 OK (if the job was submitted successfully)
2) GetAsyncJobStatus
Input: JobId
Output: Status (inProgress/DoesntExist/Completed/Errored)
3) GetAsyncJobOutput
Input: JobId
Output: OutputFile
Question
The second API, GetAsyncJobStatus, violates the principle of idempotency.
How is idempotency preserved in such APIs where we need to update the progress of a particular job ?
Is Idempotency a requirement in such situations ?
Based on the link here, idempotency is a behaviour demonstrated by an API producing the same result during its repeated invocations.
As per my understanding, idempotency is at the API-method level (we are more concerned about what would happen if a client calls this API repeatedly). Hence the best way to maintain idempotency would be to segregate read and write operations into separate APIs. This way we can reason more thoroughly about the idempotent behaviour of the individual API methods. Also, while this term is gaining traction with RESTful services, the principles hold true for other API systems as well.
In the use case you have provided, the response to the API call made by the client would differ depending upon the status of the job. Assuming that this API is read-only and does not perform any write operations on the server, the state on the server would remain the same when invoking only this API. For example, if there were 10 jobs in the system in varied states, calling this API 100 times for a job id could return a different status each time for that job (based on its progress), but the number of jobs on the server and their corresponding states would remain the same.
However, if this API were implemented in a way that alters the state of the server, then the call is not necessarily idempotent.
So keep two APIs - getJobStatus(String jobId) and updateJobStatus(String jobId). The getJobStatus is idempotent while updateJobStatus is not.
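A rough sketch of that split, with hypothetical endpoints (the URIs, payloads, and function names are illustrative only):
function Get-JobStatus {
    param([Parameter(Mandatory)][string]$JobId)
    # Read-only: repeated calls change nothing on the server
    Invoke-RestMethod -Method Get -Uri "https://jobs.example.test/jobs/$JobId/status"
}
function Update-JobStatus {
    param(
        [Parameter(Mandatory)][string]$JobId,
        [Parameter(Mandatory)][string]$Status
    )
    # Write operation: the only place server state is modified
    Invoke-RestMethod -Method Put -Uri "https://jobs.example.test/jobs/$JobId/status" `
        -ContentType 'application/json' `
        -Body (@{ status = $Status } | ConvertTo-Json)
}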
Hope this helps

What is inotify for OpenVMS?

What is the equivalent of inotify for OpenVMS?
inotify is a Linux kernel subsystem that acts to extend filesystems to notice changes to the filesystem, and report those changes to applications.
The only thing I know of in OpenVMS that looks similar to what inotify does is the SET WATCH command.
See http://labs.hoffmanlabs.com/node/217 or http://www.openvms.compaq.com/wizard/wiz_1843.html for some basic info.
During the recent Technical Update Days in October 2011, I asked OpenVMS Engineering to port inotify to OpenVMS.
Here is the answer I got from Mandar, head of OpenVMS Engineering
We are currently analyzing this and would take further action based on initial study. Currently we are putting this in the wishlist for next release of OpenVMS.
Regards
Mandar
If you install Python on OpenVMS with the LD Images from http://www.vmspython.org/DownloadAndInstallationPython, then you can use the ptd routines: http://www.vmspython.org/VMSPTDExample
Another way would be to add an ACL to the file that generates a security audit event.
An ACL is an access control list; it consists of ACEs (access control entries).
This needs to be done in two places: one is on the object you want to audit, and the other is to enable the audit to be captured and reported.
However, this auditing will catch the event but will not relay it to another application. An application that is interested in the event would need to access the audit logging facility to extract it.
On the file, add an alarm ACE.
You would add an ACE to the file and specify what conditions would trigger the security alarm to fire.
For example, to add an alarm ACE to a file called SECUREFILE.DAT:
$ SET SECURITY/ACL=(ALARM=SECURITY,ACCESS=READ+WRITE-
_$ +DELETE+CONTROL+FAILURE+SUCCESS) SECUREFILE.DAT
This will cause the file system to trigger an alarm every time SECUREFILE.DAT is accessed for READ, WRITE, DELETE, or CONTROL (changes to the file header), whether or not the attempt succeeds.
Then you have to enable auditing to catch those security events when they fire.
You do this through the SET AUDIT command. The documentation on this command is extensive, as OpenVMS can audit a large number of events, from files to queues to logical name tables, etc.
Here is a snippet from the VMS help
SET
AUDIT
Provides the management interface to the security auditing
system.
Requires the SECURITY privilege.
Format
SET AUDIT/qualifier
There are five categories of qualifiers, grouped by task, for the SET AUDIT command:

Define auditing events
  Qualifiers: /AUDIT, /ALARM, /CLASS, /ENABLE, /DISABLE
  Requirements: Specify whether you are defining alarms (/ALARM), audits (/AUDIT), or both. Also specify whether you are enabling (/ENABLE) or disabling (/DISABLE) the reporting of the event.

Define auditing log file
  Qualifiers: /DESTINATION, /JOURNAL, /VERIFY
  Requirements: Requires both the /DESTINATION and /JOURNAL qualifiers.

Define operational characteristics of the audit server and a listener mailbox (if any)
  Qualifiers: /INTERVAL, /LISTENER, /SERVER, /VERIFY
  Requirements: None.

Define secondary log file
  Qualifiers: /ARCHIVE, /DESTINATION, /VERIFY
  Requirements: None.

Define resource monitoring defaults
  Qualifiers: /BACKLOG, /EXCLUDE, /JOURNAL, /RESOURCE, /THRESHOLD, /VERIFY
  Requirements: With the /RESOURCE or /THRESHOLD qualifier, include the /JOURNAL qualifier.
Additional information available:
Qualifiers
/ALARM /ARCHIVE /AUDIT /BACKLOG /CLASS /DESTINATION /DISABLE /ENABLE /EXCLUDE /FAILURE_MODE
/INTERVAL /JOURNAL /LISTENER /RESOURCE /SERVER /THRESHOLD /VERIFY
Examples
Best to read up on the documentation
http://h71000.www7.hp.com/doc/83final/9996/9996pro_172.html