Can someone please shed some light on the different possibilities for build automation of PB 12.NET applications? Since PB is now .NET, can tools like NAnt or CruiseControl (with MSBuild) be used to build and deploy the applications?
Basically, can the process be independent of ORCAScript / E. Crane PowerGen?
I'm usually wary of "never say never", but I'm pretty sure your answer is no.
To invoke the PowerBuilder compiler, you need to call PB's ORCA API. The IDE does that; ORCAScript provides a command-line interface to it; PowerGen provides a GUI front end (plus lots of additional functionality) on top of it.
However, I'd expect you could use ORCAScript or PowerGen from these tools. For example, here's a blog post describing leveraging ORCAScript in CruiseControl with NAnt. PowerGen has a very robust set of command-line options and will give you more power and opportunities from the command line (e.g. PBL optimization, more efficient bootstrapping).
Good luck,
Terry.
I've built a set of tools I use in my day-to-day work and I would like to make them look a bit more "professional" in order to sell them to financial institutions.
At the moment these tools are written in Perl and executed from a DOS command line; they're extremely efficient, but they don't look very attractive.
So I would like to add a user interface, but I don't really know which language to use, knowing that:
A Perl CGI interface hosted on the web is not an option since the information to be given as input is quite sensitive.
It would be ideal to sell it as a package/executable.
I don't really like the Perl/Tk interface.
I'm OK with rewriting the application in another language, but I would prefer to reuse the main modules in Perl, since it's very powerful with regular expressions and lists/arrays.
What would you advise me to do?
Thanks,
Lory
If you want a non-web-based GUI, and don't like Tk, there's also Wx, which is a wrapper for the wxWidgets GUI toolkit.
However, web applications nowadays can be really easy to create (using a modern framework). Take a Mojolicious application, for example: Mojolicious carries no dependencies other than Perl 5.12.x, and provides its own web server (Hypnotoad). You can start by generating a "Lite::App", which is a simple self-contained single-file application, and then grow it to a bigger distribution later on as the need arises. It even comes with tools to convert your application to a conveniently packaged distribution that can be installed as easily as any CPAN module.
So that leaves the issue of security. User authentication, IP whitelisting, local network only... there are many ways to make a web application "for internal use only" if that's what you need.
You might just throw together a web-application prototype, and once you determine customer interest in your product, invest the substantial time in writing it as a Windows GUI application.
Continuing on from DavidO's answer, because the current web microframeworks for Perl (I prefer Dancer over his suggestion of Mojolicious, but both are good and largely equivalent) contain their own bundled web mini-servers, they also allow for the app to easily run entirely on the local machine.
Since these mini-servers default to a non-standard port (usually 3000 or 5000) and you can easily set them to a different one, they can be isolated by firewalls relatively easily, ensuring that nobody can connect to them remotely. You can also, of course, include an IP address check in the app and have it reject any requests that don't originate from localhost.
My guess is that the target system will be Windows. Use a RAD (Rapid Application Development) platform to develop a GUI. Examples for such a platform are Delphi or .NET with C# or VB. For bundling the Perl part, consider using a tool called perl2exe.
It doesn't sound like your scripts should require a web server. Also, consider the installation hassle. Only guessing as you're not giving much information about what your scripts are doing.
I am using Cava Packager to deploy my Perl-written tools. You can even generate an installer executable with just a few mouse clicks. It works pretty well with Strawberry Perl and wxPerl on Windows.
I have a complex NAnt build script, which consists of a lot of *.build and *.include files with many targets inside, which in turn are called both via depends and via call. I'd like to have a visual representation, in a tree-like form, of what calls what. There should also be an easy way to regenerate it, because the script keeps growing.
Is there any ready-made tool or some API (preferably .NET-based) I can use for this purpose?
There's NAntBuilder although it seems to be expensive (with a free trial). I've never used it personally so I couldn't recommend it either way.
I've not found one, but my general mantra is "get your designer out of my face". Imagine a database diagram in Sql Management Studio or in the EF Design Surface if you've got 30 or 50 tables. Generally my mental map is better organized.
Probably the best way to initially visualize the dependencies is to run the build and watch the task names appear in the output.
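If you decide to roll something yourself, the files are just XML, so extracting the call graph doesn't take much. Here's a minimal sketch in C# (the paths, the DOT output and the assumption that dependencies only come from depends attributes and nested <call> elements are mine, not a finished tool) that scans the *.build / *.include files and emits a Graphviz graph of which target references which:

// Sketch only: scan NAnt *.build / *.include files and emit a Graphviz DOT
// graph of "target -> referenced target" edges, taking references from each
// target's depends="" attribute and from nested <call target=""/> elements.
using System;
using System.IO;
using System.Linq;
using System.Xml.Linq;

class NAntTargetGraph
{
    static void Main(string[] args)
    {
        string root = args.Length > 0 ? args[0] : ".";
        var files = Directory.GetFiles(root, "*.build", SearchOption.AllDirectories)
            .Concat(Directory.GetFiles(root, "*.include", SearchOption.AllDirectories));

        Console.WriteLine("digraph nant {");
        foreach (var file in files)
        {
            var doc = XDocument.Load(file);

            // Match by local name so a default xmlns on <project> doesn't hide the elements.
            foreach (var target in doc.Descendants().Where(e => e.Name.LocalName == "target"))
            {
                string name = (string)target.Attribute("name") ?? "(unnamed)";

                // Targets referenced via depends="a, b, c"
                var depends = ((string)target.Attribute("depends") ?? "")
                    .Split(',')
                    .Select(d => d.Trim())
                    .Where(d => d.Length > 0);

                // Targets referenced via <call target="x"/>
                var calls = target.Descendants()
                    .Where(e => e.Name.LocalName == "call")
                    .Select(c => (string)c.Attribute("target"))
                    .Where(t => !string.IsNullOrEmpty(t));

                foreach (var dep in depends.Concat(calls).Distinct())
                    Console.WriteLine("  \"{0}\" -> \"{1}\";", name, dep);
            }
        }
        Console.WriteLine("}");
    }
}

Feed the output to Graphviz (for example dot -Tpng deps.dot -o deps.png) and you can regenerate the picture any time the script grows.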
Is PowerShell a mature enough technology for the enterprise to be using?
Are its many benefits worth the time and effort to convert existing VBS scripts, or would you only use it for new scripting projects?
We are currently using a mixture of VBS and batch files, with a login script as opposed to a lot of GPO.
We don't have a huge number of .NET programmers, whereas just about everyone at least knows a bit of VB.
“Mature enough” is slightly subjective and depends on what you mean exactly.
1) Is it powerful enough to get the job done? -- Yes. But it is not the best tool for every job; think, then choose.
2) Is it free of bugs and issues? -- No. To be prepared and informed, you may want to take a look at some of the most-voted bugs and issues here:
https://connect.microsoft.com/PowerShell/Feedback
3) Is it easy to learn? -- It depends; basically, I think it is not easy for non-professional programmers. But it is definitely possible, step by step, while having fun:
http://blogs.msdn.com/b/powershell/archive/2010/03/09/falling-is-learning-just-focus-on-having-fun.aspx
It's a good idea to start using PowerShell for new tools. As for the old tools (e.g. VB), I would not convert them to PowerShell unless there are good reasons. In most cases they can be called perfectly well from new PowerShell tools.
Yes. PowerShell brings into one environment the power of .NET, COM, WMI, and more. I use it every day to administer a family of 30-or-more servers, and it has proved to be both stable and productive.
I think PowerShell is a powerful thing. Microsoft is going in the "administer everything from PowerShell" direction (MS Exchange management shell, SharePoint 2010 management shell). This makes me think that this scripting technology will not die soon. Another thing: since the administration scripts are written in PowerShell, you can learn a lot from them, and hence automate more of your administrative tasks.
Convert only when needed.
Prefer PowerShell for new work.
It is enterprise-ready; there are gotchas, as with anything.
Doing work in PowerShell now improves your skills.
Doing work in PowerShell now positions you for vNext products enabled with PowerShell. Big win.
For learning purposes I'm developing a class-generation application in C# and WinForms. I think it could be useful to include a command-line mode that allows the application to be used in scripts.
Is it good practice to include a command-line mode in my applications? Would it be better to have two different programs, one with a GUI and one for the command line?
Actually, having a C# application be both console and GUI is problematic. Console applications (/t:exe) are launched by the command prompt, which then waits for them to finish. GUI applications (/t:winexe) are launched by the command shell, which returns immediately. While you can create and run forms from a 'console' application, it will always have a console window displayed in the background. On the other hand, 'Forms' applications don't have stdin, stdout and stderr connected and, while they can behave as command-line tools and process command arguments, they have problems when embedded in scripts (because the standard input/output is not hooked up).
If you want to expose the functionality both from a GUI-driven application and for scriptable/pipeable batch processing, the best approach is to compile your functionality into a class library, then build two separate applications (one GUI, one console) that leverage that library.
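A minimal sketch of that split (all the names here - ClassGen.Core, CodeGenerator, the CLI usage - are invented for illustration, not taken from the question): the class library holds the real logic, and the console front end is little more than argument handling around it.

// ClassGen.Core.dll -- the class library with the real logic (names are illustrative).
namespace ClassGen.Core
{
    public class CodeGenerator
    {
        // Produces the source text of a class with the given name.
        public string GenerateClass(string className)
        {
            return "public class " + className + "\n{\n}\n";
        }
    }
}

// ClassGen.Cli.exe -- the console front end (/t:exe); a WinForms front end
// (/t:winexe) would reference the same library and call the same method.
using System;
using ClassGen.Core;

class Program
{
    static int Main(string[] args)
    {
        if (args.Length == 0)
        {
            Console.Error.WriteLine("usage: classgen <ClassName>");
            return 1;   // non-zero exit code so scripts can detect failure
        }

        var generator = new CodeGenerator();
        Console.WriteLine(generator.GenerateClass(args[0]));
        return 0;
    }
}

The GUI project then references ClassGen.Core as well and wires the same GenerateClass call to a button click, so neither front end duplicates any logic.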
I'm not a C# programmer, but when I program in C++, I find it most useful to:
1.) Create a shared library with both a C and a C++ API for the core application functionality.
2.) Create one or more commandline binaries accessible to the shell interpreter.
3.) Create a GUI application for typical end users, implemented with the library (not by invoking the binaries).
This separates the logic of the application from its interface, and enables third-party developers to create alternative interfaces for the same application functionality. It also makes the application easy to script, while at the same time catering to typical end users who want a nice, shiny GUI.
Yes. If you think the program will be useful in a scripted environment then include a command line mode (without UI) so it can be used in scripts.
It doesn't have to be a separate application, but it can be. Whether you want to do that or not is entirely up to you. I'd imagine that if you had two applications they'd share the same logic assemblies but the interface (one a GUI the other a command line) would just be different.
I agree with michaelsafyan about creating a library with core functionality.
What I would add is that you should check out PowerShell cmdlets as well.
Much command-line activity will be migrating to PowerShell, and it brings a lot to the table.
http://en.wikipedia.org/wiki/Windows_PowerShell
I very often create such a utility as an API. If I need to use it from a simple command-line utility, that's easy - it just calls the API. If the command line gets too complex, maybe it's time for a WinForms application - which can also call the API. If I wanted to use it from PowerShell, or from an MSBuild task, those are still easy - they just call the API.
Creating an application on the Windows platform that behaves correctly as a console application can be problematic. It's an issue with the Windows kernel architecture, as console and GUI programs are considered two different types of application (they have a different subsystem, which you generally specify in the compiler or linker options). You can still manually redirect the I/O and open a console from a Win32 application with the Win32 function AllocConsole() and friends, but this also has some issues. See this Old New Thing post for more information.
If you want your utility/program to be usable from scripts, you can expose it as COM.
Many scripting languages on Windows have the ability to use COM objects directly.
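For example (a sketch only: the ProgId, GUIDs and method are made up for illustration, so generate your own GUIDs), a .NET class can be marked COM-visible and registered with regasm so that scripts can create it:

// Sketch: a COM-visible .NET class. Compile into a class library and register
// with: regasm MyTools.dll /codebase /tlb
// ProgId, GUIDs and the method are illustrative placeholders.
using System.Runtime.InteropServices;

namespace MyTools
{
    [ComVisible(true)]
    [Guid("9B1EDB6A-2E2C-4A9C-9F8B-2D9B0C54A111")]
    public interface IClassGenerator
    {
        string GenerateClass(string className);
    }

    [ComVisible(true)]
    [Guid("D4660088-308E-49FB-AB1C-283E1B6FA405")]
    [ProgId("MyTools.ClassGenerator")]
    [ClassInterface(ClassInterfaceType.None)]
    public class ClassGenerator : IClassGenerator
    {
        public string GenerateClass(string className)
        {
            return "public class " + className + " { }";
        }
    }
}

A VBScript client could then do Set gen = CreateObject("MyTools.ClassGenerator") and call gen.GenerateClass(...) directly.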
You should include a command-line interface in your application if it enhances usability and comfort.
For instance, calling a CLI command might be faster than starting the GUI and navigating through several menu layers to reach the same functionality.
You might ask the users of your application whether they would find a CLI mode useful.
Some words on marrying CLI & GUI on Windows:
A Windows application is either a GUI application or a console application, but not both. This is an OS issue and there is probably nothing one can do about it.
The console subsystem in Windows is horrible and PowerShell didn't change that.
Your implementation options on Windows are:
the two files approach:
Provide two files: one .com with a console, one .exe with a GUI.
Because of executable probing on the command line, the .com file will get executed before the .exe.
the console flickering approach:
Compile your GUI application as a console application; then, immediately after the GUI starts, call FreeConsole() to close the console window.
It's a bit annoying, but it works. Con: now you have a flickering console window. Pro: still one file.
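A sketch of the flickering-console approach in C# (the form and the argument handling are placeholders): the project is built as a console application so stdin/stdout work in batch mode, and the console is released before the GUI starts.

// Sketch: compiled with /t:exe so the console is attached when arguments are
// passed; in GUI mode the console is detached via FreeConsole() (it may still
// flicker briefly, as noted above). Form and argument handling are placeholders.
using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;

static class Program
{
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool FreeConsole();

    [STAThread]
    static void Main(string[] args)
    {
        if (args.Length > 0)
        {
            // Command-line mode: the console is attached, so Console.* works normally.
            Console.WriteLine("Running in batch mode with {0} argument(s).", args.Length);
            return;
        }

        // GUI mode: drop the console window that /t:exe gave us, then run the form.
        FreeConsole();
        Application.EnableVisualStyles();
        Application.Run(new MainForm());
    }
}

class MainForm : Form
{
    public MainForm()
    {
        Text = "GUI mode";
    }
}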
I agree with @Remus Rusanu: you should create a class library with your core functionality and then build a GUI app (a wrapper) on top of it.
One other benefit is that you might not even need to create a command-line app, since you can access your .NET DLL's features from PowerShell.
You can find one example over here.
Another great idea is to embed a scripting language. Then your program can be controlled by a script, and you get all the logic, branching, etc from the scripting language "for free."
There are many choices of what you can embed. Lua is one of the most popular, is intended for exactly that purpose, and is an excellent choice.
However, for a general purpose app, I'd take a hard look at embedding Python. Python is so popular, you'd have a larger group of people willing to take the effort to write a script for your app.
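In a C#/.NET application, one way to do that is to embed IronPython (a sketch under assumptions: the hosted AppApi object, the script file name and the exposed method are all invented for illustration, and the IronPython / Microsoft.Scripting assemblies must be referenced):

// Sketch: embed IronPython so a user script can drive the application.
// The "app" variable, script name and GenerateClass method are illustrative.
using IronPython.Hosting;
using Microsoft.Scripting.Hosting;

public class AppApi
{
    public string GenerateClass(string name)
    {
        return "public class " + name + " { }";
    }
}

class ScriptHost
{
    static void Main()
    {
        ScriptEngine engine = Python.CreateEngine();
        ScriptScope scope = engine.CreateScope();

        // Expose the application's API object to the script as "app".
        scope.SetVariable("app", new AppApi());

        // The user script gets loops, branching, etc. for free and can call
        // back into the app, e.g.:
        //   for name in ["Customer", "Order"]:
        //       print app.GenerateClass(name)
        engine.ExecuteFile("user_script.py", scope);
    }
}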
I package our server releases into zip files using a batch file (Windows), running the command-line version of WinZip. Previously we did this sort of thing "by hand" but I developed the process of automating it with a batch file.
The batch file has become quite complicated because our product is complicated (i.e., Which sections are we releasing this time? Are we releasing the config files as well?) and I'm starting to run into some frustrating limitations with batch files.
Would PowerShell be a good thing to investigate as an "upgrade" to the batch file? Or is that complete overkill given that most of what it would be doing is firing off DOS commands?
Bonus: can PowerShell consume .NET assemblies? As in, could I start doing the zipping with SharpZip?
If you have a working solution, then you don't need to move to PowerShell. Having said that, if you plan to make changes or improve the process, then I would highly recommend PowerShell as the way to go. PowerShell can access .NET assemblies... mostly. Some assemblies are structured in a way that makes it more difficult than others.
You can check here for some resources if you decide to look at PowerShell.
Initially I was really excited about PowerShell. Finally, a powerful native shell on Windows. However, I quickly realized that, compared to your favorite Unix shell, PowerShell is just way too verbose. Even doing simple stuff takes far too much typing compared to what you can do with bash and the GNU tools for Win32.
I like the idea that the shell knows about different types, but if I need to do that much additional work, I prefer just getting the necessary data with the various Unix stream editors.
EDIT: I just had another look at PowerShell, and I have to admit that it does have some really useful features that are not available in the traditional Unix-style tools.
For one, PowerShell owns all the commands, which means it can provide a much more coherent set of features. Parameters are treated uniformly, and you can search for commands, parameters and so forth using wildcards, which is really useful.
The second great feature is that PowerShell lets you enumerate sources that are normally not available to stream editors, such as the Windows registry, the certificate store and so forth. Of course you can have tools that do this for you and present it as text, but the PowerShell approach is just really elegant IMO.
Take a look at the PowerShell Community Extensions (PSCX); it's free and it has zip cmdlets:
Write-Zip
Write-BZip2
Write-GZip
http://www.codeplex.com/PowerShellCX
You should watch this presentation/discussion with Jeffrey Snover, PowerShell creator and architect. If you're not amazed by the technical details (lots of "wow" moments to be had), you'll be amazed by Jeffrey's enthusiasm :). Once you get the basics, it's easy to be very productive with PowerShell.
The answer is YES - PowerShell can use .NET assemblies. There is a bit of funny business involved in v1 if you need to wire up delegates; v2 makes that much cleaner.
Just call LoadFile / LoadAssembly to get the appropriate libraries in memory, and away you go:
[Reflection.Assembly]::LoadFile('/path/to/sharpzip.dll')
$zip = new-object ICSharpCode.SharpZipLib.Zip.FastZip
$zip.CreateZip('C:\Sample.zip', 'C:\BuildFiles\', $true, '^au')  # zip path, source dir, recurse (bool), file filter
# note - I didn't actually test this code
# I don't have SharpZip downloaded - just read their reference.
Also note that the PowerShell Community Extensions provide various compression cmdlets like Write-Zip.
I've tried to replace one of the lengthy build batch files I use with PowerShell. I found it a pain: at least at that time, the documentation focused on the funny verbiage and the cool, Perl-ish things you can do with it, but was lacking in the "getting simple things done" category. I got it working, but the error handling was too shaky.
YMMV; try PowerShell, you might enjoy it. But try it before converting your build batch files.
My solution: use a C# console application. I've got serious logging, exception handling, can use my utility functions, and if something doesn't work I have a real debugger. It's the first solution I like to modify.
I'm not sure about PowerShell, but might I recommend using something like IronPython (if you want access to the .NET libraries) or plain Python? You get a full-blown programming language with very few limitations.
On the one hand, if it works, just leave it. But it sounds like this is something you'll be adding to over time, and of course your eventual successor/coworker who needs to edit the batch file will also need to understand it. If you're from a programming background then you may well find the power of Powershell makes your script a lot shorter and easier to read/maintain (for example, even just having full if statements and for/while loops). On the other hand if you're not overly familiar with programming, a lot of people find Powershell a bit daunting at first glance.
Regarding the .NET part, Powershell is built on top of .NET so yes, you can access .NET assemblies (but you should always see if there's a cmdlet available first).
I would recommend a book called "The Powershell Cookbook" by Lee Holmes, published by O'Reilly. It provides "recipes" which you can use for common tasks; this will probably speed up your time to implement the script, and it'll teach you Powershell along the way.