Anyone attempted to perform automated tasks through the PCOMM or x3270 using Perl? - perl

Anyone attempted to perform automated tasks through the PCOMM or x3270 using Perl? I am doing some operations on Mainframe through PCOMM and x3270. Since some tasks include many repeatable operations, I am trying to find an easy way to automate these tasks on Mainframe.
BTW, Perl is my favorite language, so just mentioned Perl here.

I am not a mainframe guy, but check this out:
http://www.perlmonks.org/?node=611038
"I automate 3270 applications from Perl by using the IBM Personal Communications 3270 terminal emulator on Win32 via Win32::OLE. It is very well documented and it works very good."
This with example code: http://www.perlmonks.org/?node_id=674214
Using IPC to drive the session:
http://www.linuxquestions.org/questions/linux-software-2/how-do-i-use-s3270-x3270-for-scripting-767067/
I hope this helps.
Regards,

You should do some research on QUOTE SITE FILETYPE=JES. This allows you to FTP batch jobs straight into the JES Spool. I do this dozens of times a day (maybe hundreds) to get my PC to accomplish tasks on the mainframe. If it can be done in batch, then this is a great way to do it. And of course, Perl is an excellent way to create and manipulate the JCL before it's submitted.
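The question asks about Perl, but the FTP commands are identical from any language; here is a minimal Python sketch of the same idea using the standard library's `ftplib`. The host, credentials, and job-card details are placeholders you would adapt to your shop's standards:

```python
from ftplib import FTP
from io import BytesIO

def build_jcl(jobname, programs):
    """Assemble a minimal JCL deck: a job card followed by one EXEC per program."""
    lines = ["//%s JOB (ACCT),'AUTO',CLASS=A,MSGCLASS=X" % jobname]
    for i, pgm in enumerate(programs, start=1):
        lines.append("//STEP%d EXEC PGM=%s" % (i, pgm))
    return "\n".join(lines) + "\n"

def submit_job(host, user, password, jcl):
    """Submit a JCL deck straight into the JES spool over FTP."""
    ftp = FTP(host)
    ftp.login(user, password)
    # Tell the mainframe FTP server to treat uploads as jobs, not datasets.
    # (From a command-line FTP client this is the QUOTE SITE FILETYPE=JES step.)
    ftp.sendcmd("SITE FILETYPE=JES")
    ftp.storlines("STOR job.jcl", BytesIO(jcl.encode("ascii")))
    ftp.quit()
```

As the answer notes, the generate-and-manipulate step (here `build_jcl`) is where a scripting language earns its keep; the FTP part is only a few lines.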
Another thing to look into, if you Telnet to the mainframe, it opens a TSO command dialog (just like option 6 in TSO). There are many things you can do there too. Of course, if you're doing IPLs and the like, you already know this.
My trouble is that I am not a systems programmer so I cannot control the settings of the mainframe. There are many settings that my company's systems guys are too lazy to look into, so they just shut them down out of hand. I discovered the Telnet thing about a year ago, which I was using to see if a job had finished (that's the hard part of this... knowing when the job is done). Next thing I know, the Telnet access had been disabled.
I have tons of scripts that let me do things on the mainframe via Perl. Hit me up and I'd love to share them with you.

Related

How to automate macOS user interaction tests?

At work I am setting up a product that sets up and manages security policies on macOS systems, among others. Unfortunately I could not find in the product's documentation exactly which OS mechanism is used to apply and locally manage the policies, but I think this knowledge is not essential for my question.
I am looking for a solution to test the policy itself. Currently, I have to manually log in to the test system and manually call various apps and services to check if the policy blocks or allows the correct actions. Are there any tools/libraries in the Mac world to automate this task?
For GUI testing I found this library via a quick Google search: https://github.com/google/EarlGrey/tree/earlgrey2. But I don't know if it is suitable for testing apps/services in the sense of my use case. For example, would I have to find all the window IDs etc. by hand before I can write the test? Can I use it in my scenario at all?
Are there any other Swift/Objective-C libraries for this kind of tests? Or maybe even some in Ruby?
It would be ideal if this solution could also be integrated into a CI/CD pipeline.
Thanks a lot for your help!
You might be able to make your own set of test scripts based on some existing helper tools and scripts (potentially many different ones).
Some pointers are:
AppleScript - it allows automating GUI apps among other things
Automator and analogs
Alfred
For CI if you are able to wrap running your manual workflow in a shell script, that produces a well-defined output (an expected screenshot or a text file), then it could be a base for your test suite. This test suite itself could be coded in any language as long as it has access to the shell (Ruby, Python, etc. including bash/zsh itself).
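The "wrap the manual workflow in a script with a well-defined output" suggestion can be sketched in a few lines of Python. The probe commands below are hypothetical stand-ins; in practice each would be a command whose output differs depending on whether the policy allows or blocks the action:

```python
import subprocess

def check_policy(cmd, expected_stdout, expected_rc=0):
    """Run one policy probe and compare against a well-defined expectation."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return (result.returncode == expected_rc
            and result.stdout.strip() == expected_stdout)

# Hypothetical probes: each pairs a command with the output a correctly
# configured system should produce (blocked actions get their own probes).
PROBES = [
    (["echo", "policy-ok"], "policy-ok"),
]

def run_suite(probes=PROBES):
    """Return a pass/fail map; a CI step can fail if any value is False."""
    return {" ".join(cmd): check_policy(cmd, expected)
            for cmd, expected in probes}
```

A suite like this drops into any CI/CD pipeline as an ordinary script step; GUI-level probes could shell out to AppleScript via `osascript` using the same pattern.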

Is there a way to automate simple repetitive tasks in mainframe terminal window?

My employer uses the TN3270 Plus 2.07 mainframe emulator. It is quite an old version and does not support some scripting commands/features, like waiting for a screen refresh, "if" conditions, and simple computation instructions like incrementing, which are available in newer versions. I need these features, so I cannot use the built-in scripting.
It does not support DDE.
That leaves me with options like VBScript, JScript, or PowerShell (or any other option available in Windows 7 Enterprise without installing third-party tools like AutoIt).
What I need is to read some data from a file, enter it into the mainframe terminal, and wait until I receive a response from the mainframe, i.e., wait for a screen refresh (the delay is quite random: sometimes instantaneous, other times 20 to 30 seconds). Then I want to read the text from the terminal window and, depending on that information, take some action, such as continuing the loop with the next line from the file or doing something else.
Is there any way to achieve this?
Note: Changing the emulator or installing 3rd-party tools is not an option ;)
I have never tried it myself, but you might want to look at x3270 and specifically s3270 and possibly tcl3270:
http://sourceforge.net/projects/x3270/
Unless you are willing to put in the effort to write your own implementation of the 3270 protocol, some kind of 3rd party tool will be required. The question is one of cost in terms of time and licensing (with the above options, the only cost is time).
Of course, it may yet be possible to do with your existing emulator, but I am not familiar with it and the version does seem rather old.
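To make the s3270 suggestion concrete: s3270 reads actions on stdin and, for each one, writes zero or more `data: ` lines, a status line, and finally `ok` or `error`. A minimal Python wrapper around that protocol might look like this (a sketch, assuming s3270 is installed and on the PATH):

```python
import subprocess

def parse_response(lines):
    """Split an s3270 reply into screen data and a success flag.

    s3270 answers each action with zero or more 'data: ' lines,
    a status line, and finally 'ok' or 'error'.
    """
    data = [ln[6:] for ln in lines if ln.startswith("data: ")]
    ok = bool(lines) and lines[-1] == "ok"
    return data, ok

class S3270Session:
    """Drive a headless s3270 process over stdin/stdout."""

    def __init__(self, host):
        self.proc = subprocess.Popen(
            ["s3270", host], stdin=subprocess.PIPE,
            stdout=subprocess.PIPE, text=True)

    def execute(self, action):
        self.proc.stdin.write(action + "\n")
        self.proc.stdin.flush()
        lines = []
        while True:
            line = self.proc.stdout.readline().rstrip("\n")
            lines.append(line)
            if line in ("ok", "error"):
                break
        return parse_response(lines)
```

Typical usage would be `session.execute('String("LOGON user")')`, `session.execute("Enter")`, then `session.execute("Ascii()")` to read the screen text; x3270 also provides `Wait()` actions that address the "wait for screen refresh" requirement directly.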
You could use a scraping tool, like IBM HATS, or you could use some of the IBM Java TN3270 classes to talk to the mainframe.
Either approach would have you making a TN3270 connection from your own software, NOT scripting your emulator.
If you can get the mainframe software to a point where you can interact with it at a batch-job level, or you write some simple Rexx commands that interact with it, you can use the FTP protocol to submit jobs that issue commands to the mainframe software. It won't be a direct TN3270 session, but Rexx commands and/or other custom-written programs could replace that interaction. Then you could just talk to the mainframe software using simple JCL.
Yes. UiPath is a general automation tool that has dedicated activities for working with terminals and green screens.
Right now it supports, via API:
Attachmate
Rocket Blue Zone
Rumba
IBM Personal Communications
IBM EHLL
for TN3270, TN5250 or VT terminal types.

How should I create an automated deployment script?

I need to create some way to get a local WAR file deployed on a Linux server. What I have been doing until now is the following process:
Upload WAR using WinSCP.
SSH into server using PuTTY.
Move/Rename/Delete certain files folders to prepare for WAR explosion.
Explode WAR.
Send email notifying users of restart.
Stop Tomcat server.
Use tail to make sure server stopped correctly.
Change symlink to point to exploded WAR.
Start Tomcat.
Use tail to make sure server started correctly.
Send email notifying users of completed restart.
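The checklist above translates almost line for line into a script. Here is a hedged Python sketch: the host, paths, and `systemctl` service commands are placeholders for whatever your server actually uses (the original steps use Tomcat's own scripts and `tail`), and the email steps are omitted (Python's `smtplib` could cover those):

```python
import subprocess

# Hypothetical host and paths; adjust for your environment.
SERVER = "user@appserver"
WAR = "myapp.war"

def deploy_steps(version):
    """Mirror the manual checklist above as a list of shell commands."""
    remote = lambda cmd: ["ssh", SERVER, cmd]
    return [
        ["scp", WAR, f"{SERVER}:/opt/releases/{version}.war"],              # upload
        remote(f"unzip -o /opt/releases/{version}.war -d /opt/releases/{version}"),  # explode
        remote("sudo systemctl stop tomcat"),                               # stop server
        remote(f"ln -sfn /opt/releases/{version} /opt/app/current"),        # swap symlink
        remote("sudo systemctl start tomcat"),                              # start server
    ]

def deploy(version, dry_run=True):
    """Run (or just print) each step, stopping on the first failure."""
    for cmd in deploy_steps(version):
        if dry_run:
            print(" ".join(cmd))
        else:
            subprocess.run(cmd, check=True)
```

Run from your local machine, this answers the "can I automate from my computer?" question: everything happens over `ssh`/`scp`, so nothing needs to live on the server.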
This stuff is all relatively straightforward. And I'm sure there are a million and one different ways to do it. I'd like to hear about some options. My first thought was a Bash script. I have very little experience with scripting in general but thought this would be a good way to learn. I would also be interested in doing this with Ruby/Python or something similar, as I have little to no experience with these languages. I think as a young developer I should definitely get some sort of scripting language under my belt. I may also be interested in some sort of software solution that could do this stuff for me, although I think scripting would be a better way to go for the sake of ease and customizability (I might have just made that word up).
Some actual questions for those that made it this far. What language would you recommend to automate the process I've listed above? Would this be a good opportunity for me to learn Bash/Ruby/Python/something else, or should I simply take the 10 minutes to do this by hand 2-3 times a week? (I would think the answer to doing it by hand is obviously no.) Can I automate these things from my computer, or will I need to set up the scripts to run within the Linux server? Is the email something I can automate, or am I better off doing that part myself?
More questions will almost certainly come up as I do this so thanks to all in advance.
UPDATE
I should mention, I am using Maven to build the WAR. So if I can do all of this with Maven please let me know.
This might be too heavy duty for your needs, but have you looked at build automation tools such as CruiseControl or Hudson? You might also want to look at Integrity, which is more lightweight and written in Ruby (instead of Java like the other two I mentioned). These tools can do everything you said you needed in your question plus way, way more.
Edit
Since you want this to be more of a learning exercise in scripting languages than a practical solution, here's an idea for you. Instead of manually uploading your WAR each time to your server, set up a Mercurial repository on your server and create a hook (see here, here, and especially here) that executes a Ruby (or ant, or maven) script each time a changeset is pushed from a remote computer (i.e. your local workstation). You would write the script so it does all the action items in your list above. That way, you will get to learn three new things: a distributed version control paradigm, how to customize said tool, and how to write Ruby scripts to interact with your operating system (since your actions are very filesystem heavy).
The most common in my experience is Ant. It's worth learning, it's all pretty simple, and it's very useful.
You should definitely automate it, and you should aim to have it happen in one step.
What are you using to build the WAR file itself? There's some advantage to using the same tool for build and deployment. On several projects I've used Ant to build a Java project and deploy it to the servers.

I'm designing a thick UI diagnostic tool, should it have a direct integration to PowerShell

My team and I are designing a diagnostic test tool as part of our next product. The test tool will exercise a request/response API and display asynchronous events. As part of the diagnostic tool suite, we will also be providing cmdlets for the entire product API.
Is it worth embedding PowerShell execution into the tool UI ? What are other development teams doing ?
The scripts can still run stand alone in any PowerShell window or tool. From a user's perspective, they would gain the ability to launch scripts from our UI. And, since the UI can be monitoring the same devices that the scripts act on, it brings some unity to the execution of a script and monitoring of the results. Embedding script execution brings more work to the project and I'm not sure how we want to handle displaying the results of the scripts.
Do most PowerShell users expect to run their scripts from their own shell environments or within tools that come from their product vendors ? Note, our diagnostic tool will not be automatically generating scripts for the users as some Microsoft tools do (it might be valuable for inexperienced PowerShell users, but we are expecting most scripts to be fairly simple, like executing a command on a series of devices).
Fortunately, embedding the PowerShell engine, executing commands/scripts, and getting the results back is pretty trivial. That said, I'm not sure your scenario is one where I would embed PowerShell. You ask whether folks prefer to run scripts from their own shells or from within the tool vendor's environment. I can't speak for everybody, but the shells and editors that I use support some nifty features for debugging, code folding, syntax highlighting, multiple runspaces, etc. I'm not sure you would want to go through the effort to provide similar capabilities.
One reason to embed PowerShell is to execute the same PowerShell cmdlets as part of your core diagnostics and monitoring engine. That way you don't have to duplicate functionality between your diagnostic tool's app engine and the cmdlets that your customers use for automation. It sounds like the code you use to do the diagnostics and monitoring in the app is different from the code in the cmdlets? Or is there common code shared between the app and the cmdlets?
Another reason to embed PowerShell is to allow the app itself to be scriptable but this doesn't appear to fit your scenario.
Another reason to embed PowerShell is if you are implementing a new host - ie you provide some unique editing or shell functionality. Some apps that do this are PowerGUI (which allows you to launch scripts IIRC) and PowerShell Plus.
Yet another reason I have embedded PowerShell in an application is because I knew I could get certain results in much less code than the equivalent C# code. This is a weaker reason and I probably wouldn't do this in a commercial app but I have used this for one-off programs.
I agree with both Jaykul and Keith Hill - the answer is yes.
There are several approaches you could use. But in general, I'd recommend you a) create key cmdlets as part of the UI for your app and b) build the GUI on top of PowerShell (in the same way the Exchange team has done).
Doing this follows Microsoft's lead (all applications have to have a PowerShell interface), which is also being taken up by others (e.g. VMware and even Symantec leverage PowerShell in their applications).
Creating cmdlets (and possibly a provider) is pretty straightforward; a great cmdlet designer was recently released (see http://blogs.msdn.com/powershell/archive/2009/10/16/announcing-open-source-powershell-cmdlet-and-help-designer.aspx).
Hope this helps!
Yeah, the main reason I'd consider actually embedding PowerShell in that scenario is if your UI could generate PowerShell scripts for the actions the users take in the UI, so they could see what was happening, and easily learn how to automate it. That would require designing the UI based on PowerShell from the beginning ... so it sounds to me like you're better off just providing the cmdlets and samples ;)

How to build an application on top of PowerShell?

Microsoft seems to be heavily pushing that their server applications (e.g. SQL Server 2008, Exchange Server, etc.) all have some type of PowerShell integration. The logic makes sense in that one can choose to manage the application from a GUI or a CLI.
Therefore if one were to follow that trend and want to build an application that had a PowerShell interface, how would one even start?
Has anyone in the community done this type of thing? If so, what seems to be the best approach?
Update:
The UI needs to have a certain look/feel. Therefore, PowerGUI does not lend itself in this situation. However, I've used PowerGUI and do agree that it can help bridge gaps.
Part of the confusion is really whether or not hosting PowerShell is necessary in order to build an application on top of it. From what I've found, it is not (i.e., cmdlets alone suffice). However, I have not seen anyone really discuss this in the answers yet.
Start here: Writing a Windows PowerShell Host Application
Exchange 2007's admin console hosts PowerShell directly, and surfaces every UI action by showing a ubiquitous "and here's the PowerShell you just asked me to do" UI model. SQL Server 2005 & 2008's admin consoles demo the concept of surfacing everything in the UI as scripts, as a way of dogfooding scripting abilities (but there is little PowerShell support in SQL Server). (Distinction between Exchange's and SQL Server's type of support added in response to Shaw's comment, thanks.)
PowerScripting podcast has a few interviews on topics like this. Also get-scripting podcast
I attended a PowerShell / MMC 3.0 Devlab at Microsoft a few years ago that taught how to do this very thing. The basic idea was to create the "management functionality" via a series of PowerShell cmdlets in a PSSnapin for your application. CLI oriented folks can just load the snapin and party on your cmdlets directly. For the GUI oriented, you build a MMC snapin that hosts a PowerShell runspace which, in response to GUI actions, executes the appropriate PowerShell cmdlets to tweak the application that is being managed. For bonus points, you display what PowerShell code will be executed by the MMC GUI such that the code can be copied and pasted into a script. There are plenty of examples out on the web that show how to host a PowerShell runspace in your (or the MMC) process and execute PowerShell script in that runspace and get back results.
This is an intriguing idea!
I haven't ever thought about it, and I have no idea if I think it's a good idea, but some creative things could be done.
For example, suppose you have some typical administrative-ish piece of software. Don't really care what, specifically. In a classic app dev't scenario, I'd typically try to generate a list of Command objects (things that'd implement some sort of ICommand), and then my UI would bind to those.
Suppose, now, that you were to instead create a cmdlet for each Command. The UI would more-or-less exist as a friendly interface for the core logic in the suite of cmdlets.
Yeah, ok, nothing new here. People've been doing this for a long time, building up GUIs around command-line tools. I think the key difference is that you'd instead be building up individual command line tools from the concept of the application itself. Heck, it might make more sense for both the application and the cmdlets to reference some shared library of commands instead of making the GUI sit on top of the cmdlets themselves.
Errr- sorry for the scatterbrained response. This answer was pretty much purely stream-of-consciousness. :)
You could try PrimalForms for building a complete application from a script, or you could build your application with snap-in cmdlets (the latter being what SQL Server, Exchange, etc. use). Link to PrimalForms here:
http://www.primaltools.com/products/info.asp?p=PrimalForms