Is it possible to run a batch file from package manager console? - powershell

I'm using Code First Migrations with my context class in a class library (i.e. not the startup project), and I want to make batch files for the common operations to save having to pass in the parameters each time I run Add-Migration and Update-Database. I ran the "dir" command in the console and it appears to be in the solution root folder, so I tried creating a .bat, .cmd or .ps1 file in the Solution Items folder, but the Package Manager Console PowerShell doesn't seem to be able to find it.

At this very moment I happen to be reading about this in Bruce Payette's "PowerShell in Action" (a wonderful book), so I can share something with you, lucky guy:
"In this example (Poster: an example in the book), even though hello.ps1 is in the current directory, you had to put ./ in front of it to run it. This is because PowerShell doesn't execute commands out of the current directory by default. This prevents accidental execution of the wrong command."

Looks like I just needed to put ".\" at the beginning of the batch file name; as the quote above explains, PowerShell requires an explicit path before it will execute anything from the current directory.
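For example, a minimal sketch of the kind of wrapper script this enables, kept in the solution root (the project names, migration name, and the migrate.ps1 file itself are placeholders, not anything PMC defines):

# migrate.ps1 - wraps the common EF commands so the project
# parameters only have to be typed once.
param([string]$Name)

if ($Name) {
    Add-Migration -Name $Name -ProjectName MyApp.Data -StartUpProjectName MyApp.Web
}
Update-Database -ProjectName MyApp.Data -StartUpProjectName MyApp.Web

From the Package Manager Console it then runs with the explicit prefix: .\migrate.ps1 AddCustomers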

Related

Change Directory to Folder Containing PowerShell Script - Regardless of Where That Folder Is Located

I have a script that I've created to prep our customers' servers for a software install. Part of this requires the script to be run as administrator, so just instructing people to click "Run with PowerShell" doesn't get the job done. The script is in a folder with a number of .ini files that the script needs to copy to different server locations.

If I just right-click the PowerShell script and select "Run with PowerShell", it is able to find the files and copy them without issue. Unfortunately, if I open the script in the ISE, it opens with a default directory of C:\users\user, and I can't seem to copy those .ini files without first running a change-directory command to get to the folder that the script and the .ini files are in.

I'd like our installation techs to be able to run this without worrying about the exact location they initially drop these folders, and without having to change the directory manually in PowerShell. Some of our customers have multiple drives, and it might make sense to put this stuff on something other than the C: drive, so it's hard to tell where this folder might end up. But I'm not sure of a command that will get me to the directory of the *.ps1 file without knowing where that file is beforehand... Anyone have a suggestion?
You can use $PSScriptRoot, which will contain the path of the directory the script is located in.
This is referenced in the following post:
How can I get the file system location of a PowerShell script?
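For example, a minimal sketch assuming the script copies its neighboring .ini files (the destination path is a placeholder):

# $PSScriptRoot points at the folder the script lives in, wherever it was dropped.
Set-Location $PSScriptRoot
Copy-Item -Path (Join-Path $PSScriptRoot '*.ini') -Destination 'D:\Server\Config'

Note that $PSScriptRoot is only populated automatically in PowerShell 3.0 and later; on older versions you can fall back to Split-Path -Parent $MyInvocation.MyCommand.Path.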

Powershell dot sourcing opens up file in notepad

Every time I dot source a file in PowerShell, it opens a copy of the file in Notepad.
Example:
. .\MyScript.ps1
The script runs fine; it's just really annoying having these pop up all the time. Is there a way to suppress this?
I'm on Windows 7 x64 and using the latest version of PowerShell.
Example 2: This is still launching Notepad.
cls
Set-Location "\\PSCWEBP00129\uploadedFiles\psDashboard\"
. .\assets\DCMPull\Powershell\SqlServerTransfer.psm1
. .\assets\DCMPull\Powershell\RunLogging.psm1
You cannot dot source PowerShell files with the .psm1 file extension. One option is to rename them to .ps1.
Alternatively (and, in my opinion, the better approach), you can load the PowerShell modules using Import-Module <module.psm1>. Just note that the behavior of Import-Module is different from dot sourcing. Dot sourcing runs the script in the current scope and also persists all variables, functions, etc. in the current scope. Import-Module does not do that.
Although not very common, you can also export variables from modules with Export-ModuleMember.
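Applied to the paths in the question, that would look something like this (the exported names in the last line are hypothetical, purely to illustrate Export-ModuleMember):

# Import the modules instead of dot sourcing them.
Import-Module .\assets\DCMPull\Powershell\SqlServerTransfer.psm1
Import-Module .\assets\DCMPull\Powershell\RunLogging.psm1

# Inside a .psm1, explicitly export what callers should see:
Export-ModuleMember -Function Start-Transfer -Variable LogPath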
Adding to Raziel's answer: there's a lot of thought that went into only being able to dot source files with the .ps1 extension, and into why anything else is either run as a system executable or opened as a document. Here's a snippet from PeterWhittaker on GitHub:
. ./afile would only execute something if there's either an extension-less but executable aFile in the current dir, or a (not-required-to-be-executable) afile.ps1 file, with the former taking precedence if both are present; if the file exists, but is neither executable nor has extension .ps1, it is opened as if it were a document.

. <filename> with <filename> being a mere name (no path component) by (security-minded) design only ever looks for a file of that name in the directories listed in $env:PATH (see below), not in the current directory.
I encountered exactly the same situation: if you dot source a .psm1 file, the file is opened directly instead of its code being imported. Dot sourcing only imports files with the .ps1 suffix; when the suffix doesn't match, the argument isn't treated as a script to run, so the effect is the same as invoking the file directly, which naturally opens it as a document.
So this behavior isn't specific to .psm1: change the extension to .txt and you'll see the same thing, as you will for any file whose suffix is not .ps1.
You can bypass this problem by creating symbolic links or hard links!
In PowerShell 7, it's easy to create links using New-Item.
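For example, a sketch that reuses one of the paths from the question (note that creating symlinks on Windows may require elevation or Developer Mode):

# Create a .ps1 symlink pointing at the .psm1, then dot source the link.
New-Item -ItemType SymbolicLink -Path .\RunLogging.ps1 -Target .\assets\DCMPull\Powershell\RunLogging.psm1
. .\RunLogging.ps1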

How to use Pentaho Spoon to rename files that do not have an extension

I am new to using Pentaho Spoon. I have about 100 text files in a folder, none of which have file extensions. I have found that if I create a job and move the files one at a time, I can simply rename each file, adding a .txt extension to the end. What I'd like to do is create a job that goes through and renames every file, adding the .txt extension. I've tried using a regex, but can't seem to get it to work because there's no file extension.
Any help would be greatly appreciated.
It's a pretty straightforward solution, but you need to use a Transformation; Job steps won't do it.
You need the following steps:
Get File Names: just add your folder and the RegExp ".*" (without the double quotes), so everything is listed. Check that the output looks right with the "Show filename(s)..." button.
Modified Java Script Value: declare a new_filename variable that concatenates the desired extension. Remember to click "Get Variables" after adding the script so the new field is output.
var new_filename = filename + '.txt';
Process Files: select Operation = Move and filename/new_filename as your source/target filenames.
That's it!
Renaming a group of files is one thing I wouldn't use Kettle for. Why not let the shell do what the shell does best?
rem example for Windows CMD shell
ren absolute-path-to-folder\*. *.txt
This can be done using a Shell job entry, if you find reason to do it in Kettle at all.
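If the server runs PowerShell rather than CMD, the equivalent one-liner would be something like the following (the folder path is a placeholder):

# Append .txt to every extension-less file in the folder.
Get-ChildItem -Path 'C:\data\incoming' -File |
    Where-Object { $_.Extension -eq '' } |
    Rename-Item -NewName { $_.Name + '.txt' }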
I've seen "just use a shell script" answers for this before. Works great if you can guarantee you're Kettle server is on the same OS as the developer workstation. I'm in an environment where the Dev/Spoon instance is Windows, but the Prod/Kettle environment is Linux, so you can't write one script file to rule them all.
As for "Why on earth would you do this?", my scenario is an integration scenario. We're using Pentaho for Data Integration, but a different tool for Enterprise Integration. I want a Pentaho Job to produce an output file, and I want my Enterprise Integration tool to pick up the file and do something with it, but not before Pentaho is done writing the file. Renaming helps avoid a race condition when the Enterprise Integration solution recognizes the file is there, but Pentaho isn't done writing it yet.
If I could rename a set of files, for example change from test.*.csv.processing to test.*.csv, then Pentaho would create the file initially with the .processing extension, and then remove the extension once it's done. The Enterprise Integration solution that's looking for test.*.csv won't start processing the file until Pentaho renames it. Bingo, no race condition.
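Outside of Kettle, the same write-then-rename pattern can be sketched like this (the file name and the data being written are invented for illustration):

# Write under a .processing name, then rename once the writer is finished;
# a consumer watching for test.*.csv never sees a half-written file.
$final = 'test.20240101.csv'
Get-Process | Export-Csv -Path "$final.processing" -NoTypeInformation
Rename-Item -Path "$final.processing" -NewName $final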

Is it possible to save settings and load resources when compiling to just one standalone exe?

If I compile a script for distribution as a standalone exe, is there any way I can store settings within the exe itself, to save having to write to an external file? The main incentive for this is to save having to develop an installation process. I only need to store a few bytes.
Also, can resources such as images be compiled into the exe?
Using alternate data streams opens up a can of worms, so I wouldn't go that way. Writing config data back into the exe itself won't work, as the file is locked for write access during execution.
What I usually do is store config data under %A_AppData%\%A_ScriptName%\%A_ScriptName%.ini
When the script starts, I use IniRead, which also provides a default value if the key isn't found - which is the case when the script is executing for the first time.
The complementing IniWrite calls in an OnExit subroutine/function will create the ini file if necessary.
This way no installation is needed and the config is stored in the proper, familiar place.
The AutoHotkey forum has dealt with this question before. In that case, the user didn't want extra files, period. The method was to use the file system to save alternate data. Unfortunately I can't find the post.

A simpler method is to use the FileInstall command. When the script is compiled, the external file is stored within the exe. When the compiled exe executes the same command, the file is copied to the same directory as the running script. It is a simple yet effective 'install'. With a little testing for the config file, the FileInstall command can be skipped; skipping it allows changes to be made to the configuration after 'installation'.
I have not tried saving settings within the compiled exe file, but I have included resources. I'm not sure which version of AHK you're using or how you are compiling, but I can right-click my scripts to compile. There's an option to compile with options, where you can include resources in your compiled exe.

Using the nupack Package Manager Console to set working folder to solution folder

In Visual Studio, nupack adds a PowerShell window called the Package Manager Console. I am thinking that this would be a good place to run source control commands (I'm using Mercurial). However, the default working directory is my user folder, so I need to navigate to my code folder every time I load a new project.
I am wondering if there is a one-line command to set the working directory to the solution folder. e.g. does something like this exist?
cd $SolutionFolder
From the results of Get-Variable it doesn't look like there is anything immediately available, but I've never used PowerShell before, so maybe there is a way of getting the solution folder?
Thanks to Doug for pointing me in the right direction. I've written up full instructions on my blog here:
http://mark-dot-net.blogspot.com/2010/10/change-to-solution-folder-in-package.html
The basic answer is that the following command will do it:
Split-Path -parent $dte.Solution.FileName | cd
To make it more readily available, you need to create a function in your "user profile" script file, the location of which is found in the $profile variable. You will need to create the file if it doesn't exist, for example:
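# A minimal sketch: create the profile script if it's missing.
if (-not (Test-Path $profile)) {
    New-Item -ItemType File -Path $profile -Force
}

Then add a function: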
Function solutionFolder()
{
    Split-Path -parent $dte.Solution.FileName | cd
}
Now, after loading a solution in VS2010, you can simply type:
solutionFolder
and the working folder will be changed.
Try
$dte.Solution.FileName
I'm not sure when it changed, but the Package Manager Console now automatically sets the working directory to the current solution folder when you open a solution.