At work I am setting up a product that manages security policies on macOS systems, among others. Unfortunately, the product's documentation does not say exactly which OS mechanism it uses to apply and manage the policies locally, but I don't think that knowledge is essential to my question.
I am looking for a way to test the policy itself. Currently I have to log in to the test system by hand and launch various apps and services to check whether the policy blocks or allows the right actions. Are there any tools or libraries in the Mac world to automate this task?
For GUI testing, a quick Google search turned up this library: https://github.com/google/EarlGrey/tree/earlgrey2. But I don't know whether it is suitable for testing arbitrary apps/services in the sense of my use case. For example, would I have to find all the window IDs etc. by hand before I can write a test? Can I use it in my scenario at all?
Are there any other Swift/Objective-C libraries for this kind of testing? Or maybe even some in Ruby?
It would be ideal if this solution could also be integrated into a CI/CD pipeline.
Thanks a lot for your help!
You might be able to make your own set of test scripts based on some existing helper tools and scripts (potentially many different ones).
Some pointers are:
AppleScript - it allows automating GUI apps, among other things (see the sketch after this list)
Automator and analogs
Alfred
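For instance, a minimal sketch of triggering an AppleScript action from Python via osascript (the target app here is just an example; substitute whatever app your policy governs):

    import subprocess

    # Drive a GUI app via AppleScript from Python. "Safari" is only a
    # placeholder target; use an app your policy should allow or block.
    script = 'tell application "Safari" to activate'
    result = subprocess.run(["osascript", "-e", script],
                            capture_output=True, text=True)

    # A blocked action typically shows up as a non-zero exit code plus
    # an error message on stderr.
    print(result.returncode, result.stderr)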
For CI: if you can wrap your manual workflow in a shell script that produces a well-defined output (an expected screenshot or a text file), that could be the basis of your test suite. The suite itself could be coded in any language that has access to the shell (Ruby, Python, etc., including bash/zsh itself).
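A skeleton of such a suite in Python might look like this (purely illustrative; the commands and expected outcomes are placeholders for your real policy checks):

    import subprocess

    # Hypothetical cases: command to run -> whether the policy should
    # let it succeed. Replace these with your real checks.
    CASES = {
        ("open", "-a", "Calculator"): True,   # should be allowed
        ("open", "-a", "Terminal"): False,    # should be blocked
    }

    failures = []
    for cmd, should_succeed in CASES.items():
        result = subprocess.run(cmd, capture_output=True, text=True)
        if (result.returncode == 0) != should_succeed:
            failures.append((cmd, result.stderr.strip()))

    for cmd, err in failures:
        print("FAIL:", " ".join(cmd), err)

    # A non-zero exit code makes this usable as a CI step.
    raise SystemExit(1 if failures else 0)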
My employer uses the TN3270 Plus 2.07 mainframe emulator. It is quite an old version and does not support some of the scripting commands/features available in newer versions, such as waiting for a screen refresh, "if" conditions, and simple computation instructions like incrementing. I need these features, so I cannot use the built-in scripting.
It does not support DDE.
That leaves me with options like VBScript, JScript, or PowerShell (or anything else available on Windows 7 Enterprise without installing third-party tools like AutoIt).
What I need is to read some data from a file, enter it into the mainframe terminal, and wait until I receive a response from the mainframe (the timing is quite random: sometimes instantaneous, other times 20 to 30 seconds), i.e., wait for the screen refresh. Then I want to read the text from the terminal window and, depending on that information, take some action, such as continuing the loop with the next line from the file or doing something else.
Is there any way to achieve this?
Note: Changing the emulator or installing 3rd party tools is not an option ;)
I have never tried it myself, but you might want to look at x3270, specifically s3270, and possibly tcl3270:
http://sourceforge.net/projects/x3270/
Unless you are willing to put in the effort to write your own implementation of the 3270 protocol, some kind of 3rd party tool will be required. The question is one of cost in terms of time and licensing (with the above options, the only cost is time).
Of course, it may yet be possible to do with your existing emulator, but I am not familiar with it and the version does seem rather old.
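For what it's worth, s3270 is designed for exactly this kind of scripting: it reads actions on stdin and answers on stdout with "data:" lines followed by a status line and "ok" or "error". A rough sketch of driving it from Python (the host, input file, and screen check are placeholders; double-check the action names against the x3270 documentation):

    import subprocess

    # Start s3270 and talk to it over stdin/stdout.
    s3270 = subprocess.Popen(
        ["s3270", "myhost.example.com"],   # placeholder host
        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

    def action(cmd):
        """Send one action; collect its 'data:' lines and final status."""
        s3270.stdin.write(cmd + "\n")
        s3270.stdin.flush()
        lines = []
        while True:
            line = s3270.stdout.readline().rstrip("\n")
            if line in ("ok", "error", ""):
                return lines, line
            if line.startswith("data: "):
                lines.append(line[6:])

    with open("input.txt") as data:        # placeholder input file
        for record in data:
            action('String("%s")' % record.strip())
            action("Enter")
            action("Wait(Output)")             # block until the host responds
            screen, status = action("Ascii")   # dump the whole screen
            if any("READY" in row for row in screen):   # placeholder check
                continue
            # ...otherwise branch however you need...

    s3270.stdin.close()
    s3270.wait()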
You could use a scraping tool, like IBM HATS, or you could use some of the IBM Java TN3270 classes to talk to the mainframe.
Either case would have you making a TN3270 connection from your software, not scripting your emulator.
If you can get the mainframe software to a point where you can interact with it at a batch-job level -- or you write some simple Rexx commands that interact with it -- you can use the FTP protocol to submit jobs that issue commands to the mainframe software. It won't directly be a TN3270 session, but Rexx commands and/or other custom-written programs could replace that interaction. Then you could just talk to the mainframe software using simple JCL.
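For illustration: on z/OS, the FTP server can hand uploads to JES after a SITE FILETYPE=JES command, so an uploaded file is treated as JCL and submitted as a job. A rough Python sketch (host, credentials, and job file are placeholders, and your site's FTP server must have the JES interface enabled):

    from ftplib import FTP

    # Connect to the mainframe's FTP server (placeholder credentials).
    ftp = FTP("mainframe.example.com")
    ftp.login("MYUSER", "MYPASS")

    # Tell the server to route uploads to JES instead of storing them
    # as datasets; the upload below is then submitted as a job.
    ftp.sendcmd("SITE FILETYPE=JES")
    with open("myjob.jcl", "rb") as jcl:   # placeholder JCL file
        reply = ftp.storlines("STOR MYJOB", jcl)
    print(reply)   # the reply typically includes the submitted job ID
    ftp.quit()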
Yes. UiPath is a general automation tool that has dedicated activities for working with terminals and green screens.
Right now it supports, via API:
Attachmate
Rocket Blue Zone
Rumba
IBM Personal Communications
IBM EHLL
for TN3270, TN5250 or VT terminal types.
I'm in a QA department of an internal development group. Our production database programmers have been building an SSIS package to create a load file from various database bits for import into a third-party application (we are testing integration with this).
Once built, it was quickly discovered that it had dependencies on the versions of SQL Server and Visual Studio it was created with, and quite a few dependencies on the production environment as well (this is not an SSIS problem, just describing the nature of our setup).
Getting it built took several days of solid effort, and then it would not run in our QA environment.
After asking that team for the SQL queries that their package was running (it works fine in the production environment), I wrote a python script that performed the same task without any dependencies. It took me a little over two hours (note that I already had a custom library for handling our database interaction), and I was able to write out a UTF-16LE file that I needed.
Now, our production database programmers are not SSIS experts, but they use it a fair bit in their workflows -- I would readily call all of them competent in their positions.
Thus, my question -- given the time it appears to take and the dependencies on the versions of SQL Server and Visual Studio, what advantage or benefits does an SSIS package bring that I may not see with my python code? Or a shell script, or Ruby or code-flavor-of-the-moment?
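For context, my script was essentially of the following shape (simplified, with pyodbc standing in for our custom database library, and placeholder names throughout):

    import pyodbc

    # Placeholder connection string and query; the real script pulled
    # the same data the SSIS package did.
    conn = pyodbc.connect(
        "DRIVER={SQL Server};SERVER=qa-db;DATABASE=Warehouse;"
        "Trusted_Connection=yes;")
    cursor = conn.cursor()
    cursor.execute("SELECT col_a, col_b, col_c FROM dbo.SourceTable")

    # The third-party application expects a UTF-16LE load file.
    with open("load_file.txt", "w", encoding="utf-16-le") as out:
        for row in cursor:
            out.write("\t".join("" if v is None else str(v) for v in row))
            out.write("\n")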
I am not an expert in SSIS by any means, just an average developer who has been working with SSIS for a little over three years. Like any other software, SSIS has its shortcomings, but so far I have enjoyed working with it. Selection of technology depends on one's requirements and preferences. I am not going to claim SSIS is superior to other technologies. Also, I have not worked with Python, Ruby, or the other technologies that you mentioned.
Here are my two cents. Please take this with a grain of salt.
From an average developer's point of view, SSIS is easy to use once you understand its nuances, and I believe the same is true of any other technology. Here is what it offers:
1. SSIS packages are visual workflows rather than a coding tool (of course, SSIS has excellent coding capabilities too). One can easily understand what is going on within a package by looking at the workflow instead of going through hundreds of lines of code.
2. SSIS is built mainly to perform ETL (Extract, Transform, Load) jobs. It is fine-tuned to handle that functionality really well, especially with SQL Server, and it can handle flat files, DB2, Oracle, and other data sources as well.
3. You can perform most tasks with minimal or no coding. It can load millions of rows from one data source to another within a few minutes. See this example demonstrating a package that loads a million rows from a tab-delimited file into SQL Server within 3 minutes.
4. Logging capabilities capture every action performed by the package and its tasks, which helps to pinpoint errors or to track what the package did. This requires no coding. See this example for logging.
5. Checkpoints capture the package execution like a recorder and assist in restarting the package from the point of failure instead of running it from the beginning.
6. Expressions can be used to determine the package flow depending on a given condition.
7. Package configurations can be set up for different environments using database- or XML-based dtsconfig files, or machine-based environment variables. See this example for environment-variable-based configuration. Points #4 - #7 are out-of-the-box features which require minor configuration and no coding at all.
8. SSIS can leverage the .NET framework's capabilities, and developers can create their own custom components if they can't find one that meets their requirements. See this example to understand how .NET coding can be best used along with a different data source. That example was created in less than 3 hours.
9. SSIS can use the same data source for multiple transformations without having to re-read the data. See this example to understand what multicasting means. Here is an example of how XML data sources can be handled.
10. SSIS can also integrate easily with SSRS (Reporting Services) and SSAS (Analysis Services).
I have just listed the very basic things I have used in SSIS, but there are a lot of nice features. As I mentioned earlier, I am not sure whether Python, Ruby, or other languages can handle these tasks with such ease.
It all boils down to one's comfort with the technology. When a technology is new, people are often skeptical and unwilling to adopt it.
In my experience, once you understand and embrace SSIS, it is a really nice technology to use, and it works really well with SQL Server. I don't deny that I faced obstacles during development of my packages, but I mostly found a way to overcome them.
This may not be the answer that you were expecting but I hope this gives an idea.
My team and I are designing a diagnostic test tool as part of our next product. The test tool will exercise a request/response API and display asynchronous events. As part of the diagnostic tool suite, we will also be providing cmdlets for the entire product API.
Is it worth embedding PowerShell execution into the tool UI? What are other development teams doing?
The scripts can still run standalone in any PowerShell window or tool. From a user's perspective, they would gain the ability to launch scripts from our UI. And since the UI can be monitoring the same devices that the scripts act on, it brings some unity to the execution of a script and the monitoring of its results. Embedding script execution brings more work to the project, and I'm not sure how we want to handle displaying the results of the scripts.
Do most PowerShell users expect to run their scripts from their own shell environments or within tools that come from their product vendors? Note, our diagnostic tool will not be automatically generating scripts for the users as some Microsoft tools do (that might be valuable for inexperienced PowerShell users, but we expect most scripts to be fairly simple, like executing a command on a series of devices).
Fortunately, embedding the PowerShell engine, executing commands/scripts, and getting the results back is pretty trivial. That said, I'm not sure your scenario is one where I would embed PowerShell. You ask whether folks prefer to run scripts from their own shells or from within the tool vendor's environment. I can't speak for everybody, but the shells and editors that I use support some nifty features for debugging, code folding, syntax highlighting, multiple runspaces, etc. I'm not sure you would want to go through the effort of providing similar capabilities.
One reason to embed PowerShell is to execute the same PowerShell cmdlets as part of your core diagnostics and monitoring engine. That way you don't have to duplicate functionality between your diagnostic tool app engine and the cmdlets that your customers use for automation. It sounds like the code you use to do the diagnostics and monitoring in the app is different than the code in the cmdlets? Or is there common code shared between the app and the cmdlets?
Another reason to embed PowerShell is to allow the app itself to be scriptable but this doesn't appear to fit your scenario.
Another reason to embed PowerShell is if you are implementing a new host, i.e., you provide some unique editing or shell functionality. Some apps that do this are PowerGUI (which allows you to launch scripts, IIRC) and PowerShell Plus.
Yet another reason I have embedded PowerShell in an application is because I knew I could get certain results in much less code than the equivalent C# code. This is a weaker reason and I probably wouldn't do this in a commercial app but I have used this for one-off programs.
I agree with both Jaykul and Keith Hill - the answer is yes.
There are several approaches you could use, but in general I'd recommend you a) create key cmdlets as part of the UI for your app, and b) build the GUI on top of PowerShell (in the same way the Exchange team has done).
Doing this follows Microsoft's lead (all of its applications are to have a PowerShell interface), one that is also being taken up by others (e.g., VMware and even Symantec leverage PowerShell in their applications).
Creating cmdlets (and possibly a provider) is pretty straightforward - a great cmdlet designer was recently released for this (see http://blogs.msdn.com/powershell/archive/2009/10/16/announcing-open-source-powershell-cmdlet-and-help-designer.aspx).
Hope this helps!
Yeah, the main reason I'd consider actually embedding PowerShell in that scenario is if your UI could generate PowerShell scripts for the actions the users take in the UI, so they could see what was happening, and easily learn how to automate it. That would require designing the UI based on PowerShell from the beginning ... so it sounds to me like you're better off just providing the cmdlets and samples ;)
My problem:
There are numerous (>100) tools the development teams use which are "home-written". They are sometimes a Perl script, or a "web page", or just something that does a couple of small functions. I need to find a way (as part of my "middle manager in charge of tools" job) to collect these into a single catalogue. None of these tools are "productised" in any way.
I need to be able to somehow measure usage of each tool. Uploading or submitting a tool should be a trivial exercise, as should downloading the tool. Must have version management and control.
Is there a technology for centrally storing and publishing these small tools?
Does anyone have experience of such quixotic ventures in other companies?
Supplementary question...
What sort of process checks are appropriate? Do you have a review board for tools going up on the server?
I want to ensure we don't have unintentional consequences from scripts. I also want to ensure that the "Business Critical" set are identified and maintained.
We use a web-accessible front-end to SVN for 'field-developed' scripts, customizations, and small tools.
Addition:
What I use now is an instance of Trac tied to SVN on my web server. I don't know if Trac handles checking in code as well - the version I'm running does not.
The front-end I referred to previously was in use where I used to work, and I don't know what it was exactly.
I'll preface this question by saying this is for a Microsoft only shop.
If you were to write a console app to manage a data warehouse, what would you use:
1) Writing a custom environment for PowerShell (ala the latest flavors of Exchange / SQL Server)
2) Write it as a C# Console App
If #2, are there any frameworks that offload writing a "menu system" or other such tasks for you?
If this app needed to last 6 to 8 years - would you use PowerShell?
No one on the team currently has PowerShell experience, but we are quick learners.
If you write your management functionality as PowerShell cmdlets, then you can surface that functionality either by letting people run cmdlets directly, or by wrapping them in a GUI. Going with PowerShell probably gives you the most long-term flexibility, and as MS implements more PowerShell cmdlets, it means that managing your data warehouse could be incorporated with other, larger business processes if necessary. I would probably NOT choose to write a C# console app - but I have a distinctly more "administrator" perspective. We admins are tired of having custom console apps tossed at us - the idea of PowerShell is to standardize everything in a way that supports both command-line and GUI administration.
I think you can be successful with both, and you should be able to switch between the two without too much hassle. If you start out building a console app but learn PowerShell later, you can throw away a lot of the console-app-specific code (command-line parsing, for example) and build a few PowerShell cmdlets to wrap your existing API. Or, if you build a bunch of cmdlets out of the gate but need to switch to a console app later, you won't have wasted much time writing the cmdlets.
So, I don't really have strong advice one way or another. I will say: hey, go try PowerShell. If you don't like it, it's not too difficult to switch.
I have found that PowerShell is a much more maintainable solution for smaller things. Console apps require relinking against new libraries, recompiling, and redistributing whenever the libraries you depend on change (in my case, going from Visual Studio 2005 to Visual Studio 2008's code coverage, among other things), and likewise when the executables your scripts call change (vsinstr, mstest, etc.). With PowerShell scripts, you can easily customize for each environment you have and don't have to go through the compile, link, deploy cycle for every environment. In fact, if you get your path info from the registry, the same script can run in both environments.
You can do everything you want with either; I just prefer maintaining a single simple text file to maintaining a console app. Just personal preference.
When you are saying manage a data warehouse, what kind of tasks are you talking about?
Much of the management I would do in T-SQL (purging, archiving, transforming) - the interface to that can be very thin (even non-existent).
OK, based on your comment I would put the code which does all the work in stored procs and a .NET assembly (a typical API class library) - as much in stored procs as possible, with the assembly for anything that is easier done there or that requires COM or whatever. I would then either wrap the class library in cmdlets or just call the .NET objects from PowerShell (remember that PowerShell can instantiate .NET objects).
Now you have a .NET library which can also be called from web pages, GUI apps, whatever, if you ever want it, and you have a cmdlet and a direct .NET interface - plus the option of calling them from SQL if they are fully implemented at the SQL layer.
The beauty of implementing PowerShell instead of a console app is that you don't have to code all the parameter parsing or all the formatting. PowerShell takes care of all of that for you. You can, of course, override the defaults if you need to. You get wildcards for free. You can also have your cmdlet(s) take values from the pipeline, which opens up all kinds of possibilities for automation. With PowerShell, both the end user and the developer will have a much better experience.