I'm developing some scripts that I later pack into EXE using PS2EXE to ship to end user. The key point here is that they only know how to double click executable and they want to do just that. The script then builds simple GUI using WPF where end user can click buttons and check boxes and do whatever they need to do.
Now, as I have several similar scripts and total volume of code grows bigger, I would like to reuse some of it packing various functions into modules and just use Import-Module in my script - standard practice.
However, I realized that when I use PS2EXE in this case, it does not pack the modules into the EXE file. The script still works, but it requires the modules to be deployed on the user's machine, which immediately makes it over-complicated for the end user.
My question - is there a way to develop script re-using code through modules and still pack it into EXE files (along with all the imported modules) for end users, so that the single EXE file is everything they need?
One workaround: define a command in the EXE whose code you want to reuse that returns the required function declarations as text. Other scripts that need these functions can obtain those statements by executing that command, and then run them through the Invoke-Expression cmdlet.
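A minimal sketch of that pattern, where Get-SharedFunctions and the function body are hypothetical names used only for illustration:

```powershell
# In the shared EXE/script: a command that returns function
# declarations as plain text (a here-string).
function Get-SharedFunctions {
@'
function Get-Greeting {
    param([string]$Name)
    "Hello, $Name"
}
'@
}

# In a consuming script: fetch the declaration text and define the
# functions in the current session with Invoke-Expression.
Invoke-Expression (Get-SharedFunctions)
Get-Greeting -Name 'World'
```

That said, a simpler route for PS2EXE is often to concatenate the module files into the script before packing, since running arbitrary text through Invoke-Expression is easy to get wrong.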
It seems like it is not all that easy to create an executable from a PowerShell script - do you know if it was ever meant to be an option?
I have found tools like PS2EXE, but still it does not seem like it was meant to be.
I am asking if it is worth it to go the extra mile, or to leave it.
Background reason: I have some less technical users that need a smoother workflow.
As @Bill_Stewart kindly noted:
PowerShell is a shell that contains a powerful scripting language.
Which is perfectly in line with Microsoft's definition:
PowerShell is a task-based command-line shell and scripting language built on .NET.
Basically, wrapping it in an executable would go beyond its purpose. The nice thing about scripting is that it's lightweight and task-based, and in this case, that you can easily run it on multiple operating systems.
So, I wouldn't go through the effort creating an exe.
If you want to run it easily, just create a .bat or .cmd file (if using Windows). On Windows, I believe, you can also create a shortcut with command arguments and a little icon.
For Linux you could use a .sh file.
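For example, a one-line .cmd wrapper (the script name MyScript.ps1 is an assumption) placed next to the script:

```bat
@echo off
rem Run the PowerShell script that sits next to this wrapper,
rem forwarding any arguments the user passed.
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "%~dp0MyScript.ps1" %*
```

Double-clicking that file, or a shortcut to it, is about as close to "just run it" as a plain script gets on Windows.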
I've written a simple script that has multiple custom functions stored as modules. I did it this way because I was always told that if your function can be reused by other things, then it should be a module and not a .\ source include. I'm starting to think that mantra isn't right in my current scenario. I am trying to convert the script to a single .exe so that I can install it as a Windows service.
I should probably acknowledge that I understand why you wouldn't want to include system modules like Active Directory or IIS management, given the obvious issues that could lead to, but I'm only trying to include custom functions in a single, distributable, non-editable way.
I have used PowerGUI in the past but can't find any valid exes for it since Dell removed it, and from memory I don't think I've ever used it with a module.
I've tried PS2EXE-GUI and PS2EXE. Both of these make the exe, and everything works fine while the modules exist on the machine. However, as soon as I put the exe on a server that hasn't got the modules deployed to it, it fails to run. I thought the compile followed all the dependencies and included them in the single exe as part of the build? That appears not to be the case.
I've also tried PowerShell Studio 2018 by Sapien, but based on their forums you can't include modules in the compiled exe. Which again feels wrong if they are actually just custom functions, but it's the way they've written it.
I see https://poshtools.com/docs/posh-pro-tools/merge-script/ would possibly do what I need, but that's chargeable, and it looks like it actually merges all the content back into a single file. Given the time pressure, I'm starting to think I'll have to pay if there really are no better options. I just don't have time to join everything together manually, and I can't help thinking there is a better way I'm missing!
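A crude version of that merge can be scripted before running PS2EXE; the paths and file names below are assumptions for illustration, not a tested build step:

```powershell
# Sketch: inline the contents of each custom module into the main
# script, producing one self-contained .ps1 with no Import-Module
# dependencies, then compile that with PS2EXE.
$mainScript = '.\MyService.ps1'
$modules    = Get-ChildItem '.\Modules' -Filter '*.psm1' -Recurse

$merged = foreach ($m in $modules) {
    "# --- inlined from $($m.Name) ---"
    Get-Content $m.FullName -Raw
}
$merged += Get-Content $mainScript -Raw

Set-Content '.\MyService.merged.ps1' -Value ($merged -join "`r`n")
# Then: Invoke-PS2EXE .\MyService.merged.ps1 .\MyService.exe
```

This keeps the modules as the source of truth for development while shipping a single file, though it obviously loses module features like Export-ModuleMember scoping.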
Can anybody please suggest other options?
Could I also get clarification around my original mantra (functions go in modules...)?
"No, never!" or "Yes, always!" or "It's just wrong in this scenario."
Hi guys,
I just finished a script in MATLAB R2014a that reads and writes multiple text files and saves an image inside the same folder as the script. The script runs perfectly, but the compiled executable does not, so I believe it has something to do with the path that the executable is trying to use to run; I don't really know.
The error was the following:
That's the second read function in the code that tries to read a file, and it's possible to see that the code was already successful in a read/write operation, since a .txt file is created.
Just to keep it simple, I didn't use any global paths to the files and tried to keep them inside of the script and executable folder.
I don't have a lot of experience compiling stuff, so I just used deploytool and hit run to test it, so I would love to hear some insights about the possible cause of the problem.
Thank you in advance for the help!
MATLAB doesn't include every file on your path when it compiles. It tries to detect additional files that may be accessed when running the code in your application's main file and includes those in the compilation, but it isn't always 100% successful (I'm not sure exactly which cases it fails to detect).
After you have run the deploytool once, the full list of files it has detected in this way will be listed under Files required for your application to run. You can add files to this list (whether or not your project has already been compiled) using the "+" icon in the corner of that section.
I am trying to update a piece of software that is used company-wide. When the update is applied to the server, the client machines recognize they need an update and ask whether you wish to update or not. To update, the user would need to run as admin, which isn't an option in this case.
We would like to automate this process using PowerShell's Invoke-Command feature. For the most part, the only thing the update does is copy new files to the program's folder, which we have achieved with robocopy. However, there is one registry key that needs to be added in multiple locations. There is a setup file that does this, but it requires a user (with admin privileges) to click a couple of buttons, and we want this to be completely automated.
So I guess the short version of my question is, what is the best way to handle the registry changes that setup.exe does? It would be nice if there was a way to invoke the script that the executable does.
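For the registry part on its own, a hedged sketch of the remoting approach described above; the computer list, key paths, and value name are all placeholders, not the real keys setup.exe writes:

```powershell
# Sketch: push the registry value to each client over PowerShell
# remoting, alongside the robocopy file copy.
$computers = 'PC01', 'PC02'   # hypothetical client list
Invoke-Command -ComputerName $computers -ScriptBlock {
    foreach ($path in 'HKLM:\SOFTWARE\Vendor\App',
                      'HKLM:\SOFTWARE\WOW6432Node\Vendor\App') {
        # Create the key if it doesn't exist, then set the value
        if (-not (Test-Path $path)) { New-Item -Path $path -Force | Out-Null }
        New-ItemProperty -Path $path -Name 'UpdateChannel' -Value 'Stable' `
            -PropertyType String -Force | Out-Null
    }
}
```

The real key names would have to be discovered first, e.g. by comparing registry exports before and after running setup.exe manually on one machine.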
As for my question, I solved the problem with a slightly different approach (one that should have been tried initially).
When (ProgramName).exe is run, if it sees that it needs an update, it runs a program called (ProgramName).setup.exe with the parameters:
Client="Local folder" server="server location"
These parameters did NOT work from the command line, however, so I ended up using a PowerShell script to create a scheduled task that runs (ProgramName).setup.exe with those parameters.
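Something along these lines, using the ScheduledTasks cmdlets; the executable path, arguments, account name, and task name are placeholders standing in for (ProgramName) and the real values:

```powershell
# Sketch: register a task that runs the setup, with its parameters,
# under an admin account with highest privileges.
$action = New-ScheduledTaskAction `
    -Execute 'C:\Program Files\App\App.setup.exe' `
    -Argument 'Client="C:\Program Files\App" server="\\server\share\App"'

Register-ScheduledTask -TaskName 'AppUpdate' -Action $action `
    -User 'DOMAIN\AdminAcct' -Password 'AdminPassword' -RunLevel Highest

# A regular user (or a shortcut/AutoIt stub) can then trigger the
# elevated task on demand without themselves being admin:
Start-ScheduledTask -TaskName 'AppUpdate'
```

Note the ScheduledTasks module requires Windows 8 / Server 2012 or later; on older systems schtasks.exe does the same job.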
Another huge advantage to this was that I could create an icon that allows a regular user to run the scheduled task with admin privileges. I couldn't set up a shortcut directly, however, so I wrote an AutoIt executable that runs the task as admin.
I hope someone can get some level of help out of this post!
When I click the Windows button, I can type the name of an app and get completions in the menu to quickly find an app and run it.
But it looks like with PowerShell I must add the app's path to $Env:Path, which is very cumbersome to do per app. Is there a better way?
Kind of, but not really. PowerShell's tab completion function searches the current folder, known cmdlet names, and folders in the user and system path. Unless you want to rewrite the tab completion function yourself (which is possible, see PowerTab) you're stuck with that functionality.
You should be able to use Start-Process (alias start and saps), which does use some file association information similar to the Start Menu. However, you need to know the name of the application executable, not the friendly name you'd see on the Start Menu. You can't type Start-Process word. You'd have to say Start-Process winword, because the executable is winword.exe.
PowerShell is a modern command line, but it's still a command line. It doesn't help the user the same way the Windows search does because it doesn't make the same assumptions. PowerShell assumes you'll predominantly want to work with cmdlets, and that's really how most people use it, IMX. Unless you're managing servers, it's not meant to replace the GUI.
You can use tab completion with the folder names and the call operator, so you could type C:\Pro and hit tab and it will expand to (for example) & 'C:\Program Files\'. Note that it includes the call operator (&) for you here automatically, since the provider for file systems assumes you'll want it.
A trick I use is to create a batch file for each thing I use a lot from the command line and put it in a C:\tools folder, which is in my system path.
That way you don't have to add every executable to your path for tab completion to find it.
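For example, a hypothetical wrapper saved as C:\tools\np.cmd (the Notepad++ path is an assumption) lets you type np from any prompt:

```bat
@echo off
rem Forward all arguments to the real executable
"C:\Program Files\Notepad++\notepad++.exe" %*
```

Since C:\tools is on the system path, both cmd and PowerShell will tab-complete and run np without the executable's own folder ever being added to $Env:Path.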