What is inotify for OpenVMS?

inotify is a Linux kernel subsystem that extends filesystems to notice changes and report those changes to applications. Is there an equivalent for OpenVMS?

The only thing I know of in OpenVMS that looks similar to what inotify does is the SET WATCH command.
See http://labs.hoffmanlabs.com/node/217 or http://www.openvms.compaq.com/wizard/wiz_1843.html for some basic info.
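For example, a quick sketch using SET WATCH (an undocumented command, so syntax may vary between OpenVMS versions; try it on a scratch file first):
$ SET WATCH FILE/CLASS=MAJOR    ! report major file operations as %XQP messages on this terminal
$ TYPE LOGIN.COM                ! any file access now produces trace output
$ SET WATCH FILE/CLASS=NONE     ! turn the tracing off again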

During the recent Technical Update Days in October 2011, I asked OpenVMS Engineering to port inotify to OpenVMS.
Here is the answer I got from Mandar, head of OpenVMS Engineering:
We are currently analyzing this and would take further action based on initial study. Currently we are putting this in the wishlist for next release of OpenVMS.
Regards
Mandar
If you install Python on OpenVMS with the LD images from http://www.vmspython.org/DownloadAndInstallationPython, then you can use the PTD routines; see http://www.vmspython.org/VMSPTDExample for an example.

Another way would be to add an ACL to the file that generates a security audit event.
An ACL is an access control list; it consists of ACEs (access control entries).
This needs to be done in two places: one is on the object you want to audit, and the other is to enable the audit events to be captured and reported.
However, this auditing will catch the event but will not relay it to another application. An application that is interested in the event would need to access the audit logging facility to extract it.
On the file, add an Alarm ACE.
You add an ACE to the file and specify which conditions trigger the security alarm.
For example, to add an Alarm ACE to a file called SECUREFILE.DAT:
$ SET SECURITY/ACL=(ALARM=SECURITY,ACCESS=READ+WRITE-
_$ +DELETE+CONTROL+FAILURE+SUCCESS) SECUREFILE.DAT
This causes the file system to raise an alarm every time SECUREFILE.DAT is accessed for READ, WRITE, or DELETE, or its security settings are changed (CONTROL), with both successful and failed attempts reported (SUCCESS+FAILURE).
Then you have to enable auditing to catch those security events when they fire.
You do this through the SET AUDIT command. The documentation for this command is extensive, as OpenVMS can audit a large number of events, from files to queues to logical name tables, etc.
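For example, a minimal sketch of enabling the pieces needed for the Alarm ACE above (your site's audit policy may require different settings):
$ SET AUDIT/ALARM/ENABLE=ACL    ! report events flagged by Alarm ACEs to security operators
$ REPLY/ENABLE=SECURITY         ! make this terminal a security operator so it receives the alarms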
Here is a snippet from the VMS help:
SET AUDIT
Provides the management interface to the security auditing system.
Requires the SECURITY privilege.
Format
SET AUDIT/qualifier
There are five categories of qualifiers, grouped by task, for the SET AUDIT command:

Task         Qualifiers        Requirements

Define       /AUDIT,           Specify whether you are defining
auditing     /ALARM,           alarms (/ALARM), audits (/AUDIT),
events       /CLASS,           or both. Also specify whether you
             /ENABLE,          are enabling (/ENABLE) or disabling
             /DISABLE          (/DISABLE) the reporting of the
                               event.

Define       /DESTINATION,     Requires both the /DESTINATION and
auditing     /JOURNAL,         /JOURNAL qualifiers.
log file     /VERIFY

Define       /INTERVAL,        None.
operational  /LISTENER,
character-   /SERVER,
istics of    /VERIFY
the audit
server and
a listener
mailbox (if
any)

Define       /ARCHIVE,         None.
secondary    /DESTINATION,
log file     /VERIFY

Define       /BACKLOG,         With the /RESOURCE or /THRESHOLD
resource     /EXCLUDE,         qualifier, include the /JOURNAL
monitoring   /JOURNAL,         qualifier.
defaults     /RESOURCE,
             /THRESHOLD,
             /VERIFY
Additional information available:
Qualifiers
/ALARM /ARCHIVE /AUDIT /BACKLOG /CLASS /DESTINATION /DISABLE /ENABLE /EXCLUDE /FAILURE_MODE
/INTERVAL /JOURNAL /LISTENER /RESOURCE /SERVER /THRESHOLD /VERIFY
Examples
Best to read up on the documentation
http://h71000.www7.hp.com/doc/83final/9996/9996pro_172.html

Related

How to collect all information about the current job in Talend data studio

When I run any job, I want to log all information like:
job name
source details and destination details (file name/table name)
number of input records and number of records processed or saved.
I want to log all the above information and insert it into MongoDB using Talend Open Studio components. Please also explain which components I need to perform that task. Thanks.
You can use the tJava component, as sketched below: get the record counts and the source and destination names, then redirect those details to a file from tJava.
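A minimal tJava sketch (the component names tFileInputDelimited_1 and tMongoDBOutput_1 are assumptions; substitute the names from your own job, and run this tJava after the subjob finishes, e.g. via an OnSubjobOk trigger):
// read the counters that other components publish into globalMap
String job = jobName; // jobName is a field generated into every Talend job class
Integer rowsRead = (Integer) globalMap.get("tFileInputDelimited_1_NB_LINE"); // rows read by the input
Integer rowsWritten = (Integer) globalMap.get("tMongoDBOutput_1_NB_LINE");   // rows written to MongoDB
System.out.println("job=" + job + " read=" + rowsRead + " written=" + rowsWritten);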
For more about logging functionality, go through the tutorials below:
https://www.youtube.com/watch?v=SSi8BC58v3k&list=PL2eC8CR2B2qfgDaQtUs4Wad5u-70ala35&index=2
I'd consider using log4j, which has most of this information. Using MDC you could expand the log messages with custom attributes. Log4j does have a JSON format, and there seems to be a MongoDB appender as well.
It might take a bit more time to configure (I'd suggest adding the dependencies via a routine) but once configured it will require absolutely no configuration in the job. Using log4j you can create filters, etc.
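A sketch of the MDC idea with log4j 1.x (the attribute names jobName and target are invented for illustration; log4j 2's ThreadContext works analogously):
import org.apache.log4j.Logger;
import org.apache.log4j.MDC;

Logger log = Logger.getLogger("jobstats");
MDC.put("jobName", "my_job");           // custom attributes carried on every subsequent message
MDC.put("target", "orders_collection");
log.info("rows written: 1234");         // a JSON layout / MongoDB appender can include the MDC fields
MDC.remove("jobName");
MDC.remove("target");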

Is it possible to combine userRolesHeader with roles defined in realm.properties?

So I'm sending all users through apache with mod_auth_kerb. All users come in with a default userRolesHeader of users.
I'd like to add extra roles for specific accounts, but I'm not seeing a good way to do that. If you could define the users in realm.properties and it would combine with the userRolesHeader, that would be useful.
Is there another way to do this? I don't see how it can be done with apache alone since REMOTE_USER isn't available during if/else logic processing.
#rundeck
rundeck.security.authorization.preauthenticated.userNameHeader=X-Forwarded-Uuid
rundeck.security.authorization.preauthenticated.userRolesHeader=X-Forwarded-Roles
#apache
RequestHeader set "X-Forwarded-Uuid" %{REMOTE_USER}s
RequestHeader set X-Forwarded-Roles users
Internally, Rundeck uses only one method at a time: if you configure Rundeck to get the users from the realm.properties file, Rundeck also seeks the roles from that file. Currently you can configure multiple methods, but the users/roles defined in different methods are not combined.

Creating an atomic process for a netconf edit-config request

I am creating a custom system in which a user-submitted NETCONF edit-config initiates a set of actions that atomically alter the configuration of our system and then notify the user of success or failure.
Think of it as a big SQL transaction that, at the end, either commits or rolls back.
So, the steps are:
User submits an edit-config.
System accepts the config and works to implement it.
If the config is successful, it sends back a thumbs-up response (not sure of the formal way of doing this).
If the config is a failure, it sends back a thumbs-down response (and I will have to make sure the config is rolled back internally).
All this is done atomically, so if a user submits two configs in a row, they won't conflict with each other.
Our working idea (probably not the best one) was to accept the edit-config and then, within sysrepo, edit parts of our leafs with the success or failure flags, within the same session as the initial change. We were hoping this would keep everything atomic; by doing edits outside of the session, multiple configuration changes could conflict with each other.
We weren't sure whether to go about this with pure NETCONF or to leverage sysrepo directly. We noticed all the plugins/bindings made for sysrepo and figured those could be used directly to talk to our datastore.
But that said, our working idea is most likely not best-practice approach. What would be the best way to achieve this?
Our system is:
netopeer 1.1.27
sysrepo 1.4.58
libyang 1.0.167
libnetconf2 1.1.24
And our YANG file is:
module rxmbn {
  namespace "urn:com:zug:rxmbn";
  prefix rxmbn;

  container rxmbn-config {
    config true;

    leaf raw {
      type string;
    }
    leaf raw_hashCode {
      type int32;
    }
    leaf odl_last_processed_hashCode {
      type int32;
    }
    leaf processed {
      type boolean;
      default "false";
    }
  }
}
Currently we can:
Execute an edit-config to netopeer server
We can see the new config register in the sysrepo datastore
We can capture the moment sysrepo registers the data via sysrepo's API
But we are having problems with:
Atomically editing the datastore during the update session (due to locks, which is normal. In fact, if there is no way to edit during an update session, that is fine and not necessary. The main goal is the next bullet)
Atomically reacting to the new edit-config and responding to the end user
We are all a bit new to NETCONF and YANG, so I am sure there is some way to leverage the notification API or event API, either through the netopeer session or sysrepo; we just don't know enough yet.
If there are any examples or implementation advice to create an atomic transaction for this, that'd be really useful.
I know nothing of sysrepo so this is from a NETCONF perspective.
NETCONF servers process requests serially within a single session in a request-response fashion, meaning that everything you do within a single NETCONF session should already be "atomic" - you cannot send two requests and have them applied in reverse order or in parallel, no matter what you do. A well-behaved client would also wait for each response from the server before sending a new request, especially if all updates must execute successfully and in a specific order. The protocol also defines no way to cancel a request already sent to a server.
If you need to prevent other sessions from modifying a datastore while one session performs a series of <edit-config> operations, you use the <lock> and <unlock> NETCONF operations to lock the entire datastore. There is also RFC 5717 and partial locking, which locks only a specific branch of the datastore.
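A minimal sketch of that exchange (base namespace per RFC 6241; the actual edits are elided):
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <lock><target><running/></target></lock>
</rpc>
<!-- ... one or more <edit-config> requests against <running> ... -->
<rpc message-id="102" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <unlock><target><running/></target></unlock>
</rpc>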
Using notifications to report success of an <edit-config> would be highly unusual - that's what <rpc-reply> and <rpc-error> are there for within the same session. You would use notifications to inform other sessions about what's happening. In fact, there are standard base notifications for config changes.
I suggest reading the entire RFC6241 before proceeding further. There are things like candidate datastores, confirmed-commits, etc. you should know about.
Which component are you developing? Netconf client/manager or Netconf server?
In general, the Netconf server should implement individual Netconf RPC operations in an atomic way.
When a Netconf client wants to perform a set of operations in an atomic way, it should follow the procedure explained in Appendix E.1 of RFC 6241, sketched below.
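Sketched as an RPC sequence (simplified from RFC 6241, Appendix E.1; namespaces omitted for brevity):
<rpc message-id="1"><lock><target><running/></target></lock></rpc>
<rpc message-id="2"><lock><target><candidate/></target></lock></rpc>
<rpc message-id="3"><discard-changes/></rpc>            <!-- make candidate match running -->
<rpc message-id="4"><edit-config><target><candidate/></target>
                    <config>...</config></edit-config></rpc>
<rpc message-id="5"><commit/></rpc>                     <!-- apply the candidate to running atomically -->
<rpc message-id="6"><unlock><target><candidate/></target></unlock></rpc>
<rpc message-id="7"><unlock><target><running/></target></unlock></rpc>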

How does DSpace process a query in jspui?

How is a query processed in DSpace, and how is data managed between the front end and PostgreSQL?
Like every other webapp running in a Servlet Container like Tomcat, the file WEB-INF/web.xml controls how a query is processed. In case of DSpace's JSPUI you'll find this file in [dspace-install]/webapps/jspui/WEB-INF/web.xml. The JSPUI defines several filters, listeners and servlets to process a request.
The filters are used to report that the JSPUI is running, to ensure that restricted areas can be seen by authenticated users (or even by authenticated administrators only), and to handle Content Negotiation.
The listeners ensure that DSpace has started correctly. During its start, DSpace loads the configuration, opens the database connections that it uses in a connection pool, lets Spring do its IoC magic and so on.
For the beginning the most important part to see how a query is processed are the servlets and the servlet-mappings. A servlet-mapping defines which servlet is used to process a request with a specific request path: e.g. all requests to example.com/dspace-jspui/handle/* will be processed by org.dspace.app.webui.servlet.HandleServlet, all requests to example.com/dspace-jspui/submit will be processed by org.dspace.app.webui.servlet.SubmissionController.
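For illustration, such a servlet-mapping in web.xml looks roughly like this (the servlet-name "handle" is an assumption; check your web.xml for the exact names):
<servlet>
  <servlet-name>handle</servlet-name>
  <servlet-class>org.dspace.app.webui.servlet.HandleServlet</servlet-class>
</servlet>
<servlet-mapping>
  <servlet-name>handle</servlet-name>
  <url-pattern>/handle/*</url-pattern>
</servlet-mapping>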
The servlets use their Java code ;-) and the DSpace Java API to process the request. You'll find most of it in the dspace-api module (see [dspace-source]/dspace-api/src/main/java/...) and some smaller parts in the dspace-services module ([dspace-source]/dspace-services/src/main/java/...). Within the DSpace Java API there are two important classes if you're interested in the communication with the database:
One is org.dspace.core.Context. The context contains information about whether and which user is logged in, an initialized and connected database connection (if all went well) and a cache. The methods Context.abort(), Context.commit() and Context.complete() are used to manage the database transaction. That is the reason why almost all methods that manipulate the database request a Context as a method parameter: it controls the database connection and the database transaction.
The other one is org.dspace.storage.rdbms.DatabaseManager. The DatabaseManager is used to handle database queries, updates, deletes and so on. All DSpaceObjects contain a TableRow object which holds the object's information as stored in the database. Inside the DSpaceObject classes (e.g. org.dspace.content.Item, org.dspace.content.Collection, ...) the TableRow may be manipulated and the changes stored back to the database by using DatabaseManager.update(Context, DSpaceObject). The DatabaseManager provides several methods to send SQL queries to the database and to update, delete, insert or even create data in the database. Just take a look at its API, or search for "SELECT" in the DSpace source to get an example.
In JSPUI it is important to use Context.commit() if you want to commit the database state. If a request is processed and Context.commit() was not called, the transaction will be aborted and the changes get lost. If you call Context.complete(), the transaction will be committed, the database connection will be freed and the context is marked as finished. After you have called Context.complete(), the context cannot be used for a database connection any more.
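A minimal sketch of that pattern (pre-DSpace-6 API as described above; the item ID 42 is a placeholder):
import org.dspace.content.Item;
import org.dspace.core.Context;

public class UpdateExample {
    public static void main(String[] args) throws Exception {
        Context context = new Context();
        try {
            Item item = Item.find(context, 42); // look up the item by its database ID
            // ... manipulate the item's metadata here ...
            item.update();                      // stage the changes (uses DatabaseManager internally)
            context.complete();                 // commit the transaction and free the connection
        } catch (Exception e) {
            context.abort();                    // roll the transaction back on any error
            throw e;
        }
    }
}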
DSpace is quite a huge project and a lot more could be written about its ORM, the initialization of the database and so on. But this should already help you to start developing for DSpace. I would recommend that you read the "Architecture" part of the DSpace manual: https://wiki.duraspace.org/display/DSDOC5x/Architecture
If you have more specific questions you are always invited to ask them here on Stack Overflow or on our mailing lists (http://sourceforge.net/p/dspace/mailman/): dspace-tech (for any question about DSpace) and dspace-devel (for questions regarding the development of DSpace).
It depends on the version of DSpace you are running, along with your configuration.
In DSpace 4.0 or above, by default, the DSpace JSPUI uses Apache Solr for all searching and browsing. DSpace performs all indexing and querying of Solr via its Discovery module. The Discovery (Solr-based) search/indexing classes are available under the "org.dspace.discovery" package.
In earlier versions of DSpace (3.x or below), by default, the DSpace JSPUI uses Apache Lucene directly. In these older versions, DSpace called Lucene directly for all indexing and searching. The Lucene based search/indexing classes are available under the "org.dspace.search" package.
In both situations, queries are passed directly to either Solr or Lucene (again depending on the version of DSpace). The results are parsed and displayed within the DSpace UI.

PowerShell - what verb to use for a processing cmdlet?

Trying to find a standard.
The cmdlet will process data - multiple inputs, defined by parameters, into an output. Processing will take anywhere from a short time up to 5 to 15 minutes, while the system goes through a lot of data and analyses it.
"Execute" gets me a warning, but none of the "common verbs" that I found seems appropriate. I find that no- so many open etc., but no "Process" or "Execute" or "Analyse".
Is there a specific standard verb I have overlooked?
Based on the information you provided, I would suggest Invoke. But you can find some useful discussion of Cmdlet Verbs in these links:
Cmdlet Verbs on MSDN
PowerShell: Approved Verbs (through v3.0)
Some key excerpts from the first link:
Invoke - Performs an action, such as running a command or a method.
Invoke vs. Start - The Invoke verb is used to perform an operation that is generally a synchronous operation, such as running a command. The Start verb is used to begin an operation that is generally an asynchronous operation, such as starting a process.
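For instance, a minimal sketch of an advanced function using the Invoke verb (the name Invoke-DataAnalysis and its parameters are invented for illustration):
function Invoke-DataAnalysis {
    [CmdletBinding()]
    param(
        # input data sets; names and types are illustrative only
        [Parameter(Mandatory, ValueFromPipeline)]
        [string[]]$InputPath,

        [Parameter(Mandatory)]
        [string]$OutputPath
    )
    process {
        foreach ($path in $InputPath) {
            Write-Verbose "Analysing $path"
            # ... the long-running, synchronous analysis goes here ...
        }
    }
    end {
        Write-Verbose "Results would be written to $OutputPath"
    }
}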
For a list of approved verbs, use the Get-Verb cmdlet. I often find this useful if I want to find an appropriate verb without schlepping to MSDN or Google (or Bing, or DuckDuckGo).
PS> Get-Verb
Verb        Group
----        -----
Add         Common
Clear       Common
Close       Common
Copy        Common
Enter       Common
Exit        Common
Find        Common
Format      Common
Get         Common
Hide        Common
Join        Common
Lock        Common
Move        Common
New         Common
Open        Common
Pop         Common
Push        Common
Redo        Common
Remove      Common
Rename      Common
Reset       Common
Search      Common
Select      Common
Set         Common
Show        Common
Skip        Common
Split       Common
Step        Common
Switch      Common
Undo        Common
Unlock      Common
Watch       Common
Backup      Data
Checkpoint  Data
Compare     Data
Compress    Data
Convert     Data
ConvertFrom Data
ConvertTo   Data
Dismount    Data
Edit        Data
Expand      Data
Export      Data
Group       Data
Import      Data
Initialize  Data
Limit       Data
Merge       Data
Mount       Data
Out         Data
Publish     Data
Restore     Data
Save        Data
Sync        Data
Unpublish   Data
Update      Data
Approve     Lifecycle
Assert      Lifecycle
Complete    Lifecycle
Confirm     Lifecycle
Deny        Lifecycle
Disable     Lifecycle
Enable      Lifecycle
Install     Lifecycle
Invoke      Lifecycle
Register    Lifecycle
Request     Lifecycle
Restart     Lifecycle
Resume      Lifecycle
Start       Lifecycle
Stop        Lifecycle
Submit      Lifecycle
Suspend     Lifecycle
Uninstall   Lifecycle
Unregister  Lifecycle
Wait        Lifecycle
Debug       Diagnostic
Measure     Diagnostic
Ping        Diagnostic
Repair      Diagnostic
Resolve     Diagnostic
Test        Diagnostic
Trace       Diagnostic
Connect     Communications
Disconnect  Communications
Read        Communications
Receive     Communications
Send        Communications
Write       Communications
Block       Security
Grant       Security
Protect     Security
Revoke      Security
Unblock     Security
Unprotect   Security
Use         Other
PS>