SQLKata with SQLite minimal example (PowerShell) - powershell

I have a SQLite database, say c:\myDb.sqlite.
I have figured out how to build a query against this db in SQLKata:
$query = New-Object SqlKata.Query("myTable")
$compiler = New-Object SqlKata.Compilers.SqliteCompiler
$query.Where("myColumn", "1")
$result = $compiler.Compile($query)
But I have no clue at all how to submit this to my Sqlite database.
Can anyone help?
Thanks,
Alex

Getting this to work from PowerShell is hampered by two difficulties:
Loading the assemblies related to NuGet packages in general and the Microsoft.Data.Sqlite NuGet package in particular often requires extra, non-obvious work in PowerShell.
PowerShell generally doesn't surface extension methods as such - e.g. .Get() on query instances - necessitating explicit calls to the static methods of [SqlKata.Execution.QueryExtensions] instead.
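The same limitation applies to any .NET extension method, which is easy to verify interactively with LINQ:
# In C#, new[] { 1, 2, 3 }.Sum() calls the LINQ extension method implicitly.
# PowerShell requires an explicit static call on the type that defines it:
[System.Linq.Enumerable]::Sum([int[]] (1, 2, 3))  # -> 6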
Specifically, using NuGet packages from PowerShell requires the following steps, which are neither convenient nor obvious:
Merely installing NuGet packages with Install-Package or trying to use them from the local cache created by .NET SDK projects in $HOME/.nuget/packages is often not enough, because any assemblies they depend on aren't then present in the same directory, which is what Add-Type requires.
They must also be unpacked in a platform-appropriate manner via an auxiliary .NET SDK project to a single target folder (per package or combined), as outlined in this answer.
Additionally, for the Microsoft.Data.Sqlite package, the platform-appropriate native library (e.g., win-x64\native\*.dll from the "runtimes" folder subtree of the .NET SDK project's publish folder) must be copied directly to the target folder in PowerShell (Core), but curiously not in Windows PowerShell, at least as of package version 5.0.9.
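On 64-bit Windows, that extra copy step might look something like this (hypothetical paths; adjust to your .NET SDK project's publish folder and your target folder):
# Hypothetical paths - adjust to your publish and target folders.
Copy-Item .\publish\runtimes\win-x64\native\*.dll .\target\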
The following sample code uses the Add-NuGetType helper function, available from this MIT-licensed Gist, which automates all of the steps above:
Note:
Assuming you have looked at the linked code to ensure that it is safe (which I can personally assure you of, but you should always check), you can install Add-NuGetType directly as follows (instructions for how to make the function available in future sessions or to convert it to a script will be displayed):
irm https://gist.github.com/mklement0/7436c9e4b2f73d7256498f959f0d5a7c/raw/Add-NuGetType.ps1 | iex
When first run, the function downloads and installs a private copy of the .NET SDK embedded inside the folder in which NuGet packages downloaded later are cached. This initial installation takes a while, and the -Verbose switch used below reports its progress.
Add-NuGetType is not meant for production use, but for experimentation with NuGet packages; run help Add-NuGetType for more information.
# Reference the relevant namespaces.
using namespace SqlKata
using namespace SqlKata.Compilers
using namespace SqlKata.Execution
using namespace Microsoft.Data.Sqlite
# Load the SqlKata and Sqlite assemblies.
# See the comments above for how to install the Add-NuGetType function.
# Note: On first call, a private copy of the .NET SDK is downloaded
# on demand, which takes a while.
Add-NuGetType -Verbose SqlKata, SqlKata.Execution, Microsoft.Data.Sqlite
# First, create sample database './sample.db' with table 'sample_table'
@'
create table sample_table (Name string, Age int);
insert into sample_table (Name, Age) values ("JDoe", 42), ("JRoe", 43);
.save ./sample.db
'@ | sqlite3
# Create a [SqliteConnection] instance...
$connection = [SqliteConnection]::new("Data Source=$PWD/sample.db")
# ... and create a query factory for it.
$sqliteDb = [QueryFactory]::new($connection, [SqliteCompiler]::new())
# Create and execute a sample query.
$query = $sqliteDb.Query("sample_table").Where("Name", "JRoe")
# Note the need to use the static methods of [SqlKata.Execution.QueryExtensions],
# because PowerShell doesn't make *extension methods* automatically available.
[SqlKata.Execution.QueryExtensions]::Get($query) # outputs [Dapper.SqlMapper+DapperRow] instances

Related

PowerShell Core + Pester - Separating tests from src

Question:
What would be the best way to import functions to tests that don't reside in the same directory?
Example
📁 src
📄 Get-Emoji.ps1
📁 test
📄 Get-Emoji.Tests.ps1
Inb4
Pester documentation[1] suggests test files are placed in the same directory as the code that they test. No examples of alternatives provided.
Pester documentation[2] suggests dot-sourcing to import files. Only with examples from within same directory
Whether breaking out tests from the src is good practice, is to be discussed elsewhere
Using PowerShell Core for cross-platform support on different OS filesystems (forward vs. backward slashes)
[1] File placement and naming convention
Pester considers all files named *.Tests.ps1 to be test files. This is the default naming convention that is used by almost all projects.
Test files are placed in the same directory as the code that they test. Each file is named after the function it tests. This means that for a function Get-Emoji we would have Get-Emoji.Tests.ps1 and Get-Emoji.ps1 in the same directory.
[2] Importing the tested functions
Pester tests are placed in .Tests.ps1 file, for example Get-Emoji.Tests.ps1. The code is placed in Get-Emoji.ps1.
To make the tested code available to the test we need to import the code file. This is done by dot-sourcing the file into the current scope like this:
Example 1
# at the top of Get-Emoji.Tests.ps1
BeforeAll {
    . $PSScriptRoot/Get-Emoji.ps1
}
Example 2
# at the top of Get-Emoji.Tests.ps1
BeforeAll {
    . $PSCommandPath.Replace('.Tests.ps1', '.ps1')
}
I tend to keep my tests together in a single folder that is one or two parent levels away from where the script is (which is usually under a named module directory, within a folder named either Private or Public). I just dot-source my script or module and use .. to reference the parent path, with $PSScriptRoot (the current script's path) as a point of reference. For example:
Script in \SomeModule\Public\get-something.ps1
Tests in \Tests\get-something.tests.ps1
BeforeAll {
    . $PSScriptRoot\..\SomeModule\Public\get-something.ps1
}
Use forward slashes if cross-platform compatibility is a concern; Windows doesn't mind whether path separators are forward slashes or backslashes. You could also run the path through Resolve-Path first if you wanted to be certain a valid full path is used, but I don't generally find that necessary.
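Combining these suggestions, the top of the test file might look like this (a sketch; adjust the relative path to your layout):
# at the top of \Tests\get-something.tests.ps1
BeforeAll {
    # Forward slashes work on Windows too, so this stays cross-platform.
    . (Resolve-Path "$PSScriptRoot/../SomeModule/Public/get-something.ps1")
}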

What are my PowerShell options for determining a path that changes with every build (TeamCity)?

I'm on a project that uses TeamCity for builds.
I have a VM, and have written a PowerShell script that backs up a few files, opens a ZIP artifact that I manually download from TeamCity, and then copies it to my VM.
I'd like to enhance my script by having it retrieve the ZIP artifact (which always has the same name).
The problem is that the download path contains the build number which is always changing. Aside from requesting the download path for the ZIP artifact, I don't really care what it is.
An example artifact path might be:
http://{server}/repository/download/{project}/{build_number}:id/{project}.zip
There is a "Last Successful Build" page in TeamCity that I might be able to obtain the build number from.
What do you think the best way to approach this issue is?
I'm new to TeamCity, but it could also be that the answer is "TeamCity does this - you don't need a PowerShell script." So direction in that regard would be helpful.
At the moment, my PowerShell script does the trick and only takes about 30 seconds to run (which is much faster than my peers that do all of the file copying manually). I'd be happy with just automating the ZIP download so I can "fire and forget" my script and end up with an updated VM.
Seems like the smallest knowledge gap to fill, and retrieving changing path info at run time with PowerShell seems like a pretty decent skill to have.
I might just use C# within PS to collect this info, but I was hoping for a more PS way to do it.
Thanks in advance for your thoughts and advice!
Update: It turns out some other teams had been using Octopus Deploy (https://octopus.com/) for this sort of thing so I'm using that for now - though it actually seems more cumbersome than the PS solution overall since it involves logging into the Octopus server and going through a few steps to kick off a new build manually at this point.
I'm also waiting for the TC administrator to provide a Webhook or something to notify Octopus when a new build is available. Once I have that, the Octopus admin says we should be able to get the deployments to happen automagically.
On the bright side, I do have the build process integrated with Microsoft Teams via a webhook plugin that was available for Octopus. Also, the Developer of Octopus is looking at making a Microsoft Teams connector to simplify this. It's nice to get a notification that the new build is available right in my team chat.
You can try to get your artefact from this URL:
http://<ServerUrl>/repository/downloadAll/<BuildId>/.lastSuccessful
Where BuildId is the unique identifier of the build configuration.
My implementation, in PowerShell:
#
# GetArtefact.ps1
#
Param(
    [Parameter(Mandatory=$false)][string]$TeamcityServer = "",
    [Parameter(Mandatory=$false)][string]$BuildConfigurationId = "",
    [Parameter(Mandatory=$false)][string]$LocalPathToSave = ""
)
Begin
{
    $username = "guest"
    $password = "guest"

    function Execute-HTTPGetCommand {
        param(
            [string] $target = $null,
            [string] $outFile = $null
        )
        $request = [System.Net.WebRequest]::Create($target)
        $request.PreAuthenticate = $true
        $request.Method = "GET"
        $request.Accept = "*/*"
        $request.Credentials = New-Object System.Net.NetworkCredential($username, $password)
        $response = $request.GetResponse()
        # The artifact bundle is binary (a ZIP), so copy the response
        # stream to disk as-is; reading it as text would corrupt it.
        $responseStream = $response.GetResponseStream()
        $fileStream = [System.IO.File]::Create($outFile)
        $responseStream.CopyTo($fileStream)
        $fileStream.Dispose()
        $response.Dispose()
    }

    Execute-HTTPGetCommand "http://$TeamcityServer/repository/downloadAll/$BuildConfigurationId/.lastSuccessful" $LocalPathToSave
}
And call this with the appropriate parameters.
EDIT: Note that the credentials used here are for the guest account. You should check whether the guest account has the permissions to do this, or specify an appropriate account.
Try constructing the URL to download build artifact using TeamCity REST API.
You can get a permanent link using a wide range of criteria like last successful build or last tagged with a specific tag, etc.
e.g., to get the last successful build, you can use something like:
http://{server}/app/rest/builds/buildType:(id:{build.conf.id}),status:SUCCESS/artifacts/content/{file.name}
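For example, from PowerShell (all values are placeholders; prefix the path with /guestAuth or /httpAuth depending on how you authenticate):
# Placeholders: substitute your server, build configuration ID, and artifact name.
$server      = 'teamcity.example.com'
$buildTypeId = 'MyProject_Main'
$fileName    = 'MyProject.zip'
$url = "http://$server/guestAuth/app/rest/builds/buildType:(id:$buildTypeId),status:SUCCESS/artifacts/content/$fileName"
Invoke-WebRequest -Uri $url -OutFile ".\$fileName"  # add -Credential for an authenticated account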
TeamCity has the capability to publish its artifacts to a built in NuGet feed. You can then use NuGet to install the created package, not caring about where the artifacts are. Once you do that, you can install with nuget.exe by pointing your source to the NuGet feed URL. Read about how to configure the feed at https://confluence.jetbrains.com/display/TCD10/NuGet.
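For example (the feed URL is a placeholder; TeamCity's NuGet settings page shows the real one):
# Placeholder feed URL - copy the actual URL from TeamCity's NuGet settings page.
nuget.exe install MyProject -Source http://teamcity.example.com/guestAuth/app/nuget/v1/FeedService.svc/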
Read the file content of the path in the TEAMCITY_BUILD_PROPERTIES_FILE environment variable.
Locate the teamcity.configuration.properties.file row in the file, iirc the value is backslash encoded.
Read THAT file, and locate the teamcity.serverUrl value, decode it.
Construct the url like this:
{serverurl}/httpAuth/repository/download/{buildtypeid}/.lastSuccessful/file.txt
Here's an example (C#):
https://github.com/WideOrbit/buildtools/blob/master/RunTests.csx#L272
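A rough PowerShell sketch of the same steps (simplified; the unescaping and error handling are minimal):
# 1. Read the build properties file named by the environment variable.
$buildProps = Get-Content $env:TEAMCITY_BUILD_PROPERTIES_FILE
# 2. Locate the teamcity.configuration.properties.file row (the value is backslash-escaped).
$row = ($buildProps -match '^teamcity\.configuration\.properties\.file=') | Select-Object -First 1
$configFile = ($row -split '=', 2)[1] -replace '\\(.)', '$1'
# 3. Read THAT file and decode the teamcity.serverUrl value.
$row = ((Get-Content $configFile) -match '^teamcity\.serverUrl=') | Select-Object -First 1
$serverUrl = ($row -split '=', 2)[1] -replace '\\(.)', '$1'
# 4. Construct the URL (build type ID and file name are placeholders).
"$serverUrl/httpAuth/repository/download/MyBuildTypeId/.lastSuccessful/file.txt"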

How to pass a parameter to Chef recipe from external source

I'm new to Chef and seeking help here. I'm looking into using Chef to deploy our builds to Chef node servers (Windows Server 2012 machines). I have a cookbook called copy_builds that goes out to a central repository, selects the build we want to deploy, and copies it out to the node server. The recipe I have contains the basic copy steps, and it could be used for every build we want to deploy except for one thing: the build name.
Here is an example of the recipe:
powershell_script 'Copy build files' do
  code '
    $Project = "Dev3_SomeCoolBuild"
    net use "\\\\server\\build_share\\drop\\$Project"
    $BuildNum = GC "\\\\server\\build_share\\drop\\$Project\\buildlabel.txt"
    robocopy \\\\server\\build_share\\drop\\$Project\\bin W:\\binroot\\$BuildNum'
end
As you can see, the variable $Project contains the name of the build in this recipe. If we have 100 different builds, all with different names, then what is the best way to handle this without creating 100 different recipes for my copy_builds cookbook?
BTW: this is how I'm currently calling Chef to deploy, which is in a PowerShell script that's external to Chef:
knife node run_list set $Node "recipe[copy_builds::$ProjectName],recipe[install_build]"
This command (from the external PowerShell script) contains the project/build name info within its own $ProjectName variable. In this case $ProjectName contains the value of 'Dev3_SomeCoolBuild', to reference the recipe Dev3_SomeCoolBuild.rb.
What I'd like is have just one default recipe under copy_builds cookbook, and pass in the build/project name. Is this possible? And what is the best way to do it? I've read about data bags, attributes, and providers, but not sure if they would work for what I want.
Please advise.
Thanks,
Keith
The best approach for you is likely to use a single recipe that gets a list of projects to deploy from a data bag or node attributes (or both). So basically take what you have now and put it in a loop, and then either use roles to set node attributes or put the project mapping into a data bag item.
I ended up using attributes here to solve my problem. I updated my script to write the build name to the attributes/default.rb file for the copy_builds recipe and upload the cookbook to Chef each time a deployment is run.
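That part of the external script might look roughly like this (a sketch; the cookbook path and project name are illustrative):
# A sketch - the cookbook path and project name are illustrative.
$ProjectName = 'Dev3_SomeCoolBuild'
"default['copy_builds']['build'] = '$ProjectName'" |
    Set-Content .\cookbooks\copy_builds\attributes\default.rb
knife cookbook upload copy_builds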
My recipe now includes a call to the attributes file to get the build name, like so:
powershell_script 'Copy build files' do
  code <<-EOH
    $BuildNum = GC \\\\hqfas302002c\\build_share\\drop\\"#{node['copy_builds']['build']}"\\buildlabel.txt
    robocopy \\\\hqfas302002c\\build_share\\drop\\"#{node['copy_builds']['build']}"\\webbin W:\\binroot\\$BuildNum /E
  EOH
end
And now my call to Chef looks like this:
knife node run_list set $Node "recipe[copy_builds],recipe[install_build]"

WinSCP Disable ResumeSupport in PowerShell

I am using WinSCP to connect a SQL Server to an SFTP server. I am trying to write a file to an SFTP server where I only have write access, not modify. I am having a problem because I get back:
Cannot create remote file '/xxx.filepart'.
The documentation suggests this is because I do not have modify access to the target directory. I did this: WinSCP -> Preferences -> Endurance -> Disable.
I checked the winscp.ini file and ResumeSupport is 2 (I believe this means disabled). I ran "echo $transferOptions.ResumeSupport" and it says that it is in a default state.
I have checked this documentation:
https://winscp.net/eng/docs/ui_pref_resume
https://winscp.net/eng/docs/library_transferoptions#resumesupport
However, I don't see a PowerShell example, just C#.
I have tried various permutations of $transferOptions.ResumeSupport.State = Off, $transferOptions.ResumeSupport.Off, and whatnot. One of these says that it's read-only.
I know $transferOptions is a variable here, but it comes from the default script; it's the object that determines transfer options: $transferOptions = New-Object WinSCP.TransferOptions
Thanks in advance for your help.
edit: The overall problem is I only have write access to the server, but not modify. I am getting a new error: "Cannot overwrite remote file '/xxx'.$$". It looks like the dollar signs are some sort of temp file that it's trying to create. Is there a way to disable whatever setting is causing this?
Syntax for using an enumeration in PowerShell is described in the article Using WinSCP .NET assembly from PowerShell.
Enumeration values are accessed using static field syntax [Namespace.Type]::Member, for example [WinSCP.Protocol]::Sftp.
You can find a PowerShell example for TransferResumeSupport.State in the Converting to .NET assembly section of the get and put command documentation:
$transferOptions = New-Object WinSCP.TransferOptions
$transferOptions.ResumeSupport.State = [WinSCP.TransferResumeSupportState]::Off
$session.GetFiles(..., ..., $False, $transferOptions).Check()
WinSCP GUI can also generate a code template (including TransferOptions and TransferResumeSupportState code) for you.
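In context, a complete upload with resume support disabled might look like this (a sketch; host details, paths, and the fingerprint are placeholders):
# Load the WinSCP .NET assembly (path is a placeholder).
Add-Type -Path 'WinSCPnet.dll'
# Placeholder connection details.
$sessionOptions = New-Object WinSCP.SessionOptions -Property @{
    Protocol              = [WinSCP.Protocol]::Sftp
    HostName              = 'sftp.example.com'
    UserName              = 'user'
    Password              = 'password'
    SshHostKeyFingerprint = 'ssh-rsa 2048 xxxxxxxxxxx...'
}
$session = New-Object WinSCP.Session
try {
    $session.Open($sessionOptions)
    $transferOptions = New-Object WinSCP.TransferOptions
    # No .filepart temp file, since the target allows write but not modify.
    $transferOptions.ResumeSupport.State = [WinSCP.TransferResumeSupportState]::Off
    $session.PutFiles('C:\local\xxx', '/xxx', $False, $transferOptions).Check()
}
finally {
    $session.Dispose()
}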

PowerShell: run code when importing module

I have developed a PowerShell module in C# and implemented a few commands.
How can I execute C# code in this module when it's imported by PowerShell?
Create a module manifest with the ModuleToProcess (or RootModule in V3) field set to the PSM1 file and the NestedModules field set to the DLL, e.g.:
RootModule = 'Pscx.psm1'
NestedModules = 'Pscx.dll'
This is what we do in the PowerShell Community Extensions (PSCX) to achieve exactly this: fire up a script first. You can see our PSD1 file here.
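A minimal manifest along those lines might look like this (file names are placeholders):
# MyModule.psd1 - file names are placeholders.
@{
    ModuleVersion = '1.0'
    RootModule    = 'MyModule.psm1'  # script module; its top-level code runs on import
    NestedModules = 'MyModule.dll'   # binary module implementing the compiled cmdlets
}
Any top-level code in MyModule.psm1 then runs whenever the module is imported.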
This is a very basic solution; simply replace the code within the {} with your source (my test below):
Add-Type 'public class c { public const string s = "Hello World"; }'; [c]::s
enjoy
I'm also writing a binary cmdlet in .NET. I have found that if you create a class that inherits from at least DriveCmdletProvider, that class can implement InitializeDefaultDrives.
This method will get called when Import-Module is called on your DLL.
You could use this 'feature' to stand up some session (or module session) data.