MongoDB migrations for .NET Core

I have a .NET Core project that works with MongoDB.
I've researched data migrations, and none of the existing solutions work for me.
There is a MongoMigrations NuGet package, but it is not compatible with .NET Core.
The goal is to have a method that checks the current db version and, if it is not up to date, performs the appropriate update, for example by calling some RemoveUserCollection() method or something like that.
Did anyone have the same problem, or even better, a solution for it? :)
Thanks!

I've stumbled on this issue a few times myself, but it takes pretty much two minutes to bake it manually into your application startup, or to script it out in JavaScript and deploy it like any other system change.
I've got a PowerShell script like:
param(
    [Parameter(Mandatory=$True)][string]$Hostname,
    [string]$Username = $null,
    [string]$Password = $null
)
# Build the auth arguments only when a username was supplied.
$authParameters = ''
if ($Username) {
    $authParameters = "-u $Username -p $Password --authenticationDatabase admin"
}
Write-Host "Running all migration scripts..."
# Pick up every .js migration file in the current directory.
$scripts = Get-ChildItem | Where-Object { $_.Extension -like '.js' } | ForEach-Object { $_.Name }
Invoke-Expression "c:\mongodb\bin\mongo.exe --host $Hostname $authParameters $scripts"
then I just dump a load of .js files in the same directory.
;(function () {
    // Each migration has a fixed _id, so it only ever runs once.
    var migration = { _id: ObjectId("568b9e75e1e6530a2f4d8884"), name: "Somekinda Migration (929394)" };
    var testDb = db.getMongo().getDB('test');
    if (!testDb.migrations.count({ _id: migration._id })) {
        testDb.migrations.insert(migration);
        print('running ' + migration.name);
        // Do Work: stamp every person document with a modifiedAt date.
        testDb.people.find().snapshot().forEach(
            function (person) {
                person.modifiedAt = new Date();
                testDb.people.save(person);
            }
        );
    }
})();
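With a handful of migration files alongside it, running the script looks like this (the script name and credentials here are hypothetical):
.\migrate.ps1 -Hostname localhost -Username admin -Password s3cret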


Not able to run an exe in jenkins pipeline using powershell

I am trying to execute a process written in C# through a Jenkins pipeline during the build and deployment process.
It is a simple executable which takes 3 arguments. When it gets called from the Jenkins pipeline using a PowerShell function, it doesn't write any of the logs which are plentiful within the code of this exe, and it does not show anything in the pipeline logs as to what happened to this process. The log output around it is clean, i.e. "Started..." and "end" get printed in the Jenkins build log before and after the execution of this process.
When I try to run the same exe on a server directly with the same PowerShell script, it runs perfectly fine. Could you please let me know how I can determine what's going wrong here, or how I can make the logs more verbose so I can figure out the root cause?
Here is the code snippet
build-utils.ps1
function disable-utility($workspace) {
    # The code here fetches the executable and its supporting libraries from
    # the Artifactory location and unzips it on the build agent server.
    # Below is the call to the executable.
    Get-Content $xmlPath # this prints the whole contents of the xml file which is being used as an input to my exe
    echo "disable exe path exists : $(Test-Path ""C:\Jenkins\workspace\utils\disable.exe"")" # output is TRUE
    echo "Started..."
    Start-Process -NoNewWindow -FilePath "C:\Jenkins\workspace\utils\disable.exe" -ArgumentList "-f $xmlPath 0" # $xmlPath is a path to an xml file
    echo "end."
}
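Worth noting: Start-Process without -Wait returns immediately, so the pipeline step can finish (and print "end.") before the exe has produced any output. A minimal, more observable variant of that call, reusing the same hypothetical paths (the log file names are made up for illustration):
$proc = Start-Process -NoNewWindow -Wait -PassThru `
    -FilePath "C:\Jenkins\workspace\utils\disable.exe" `
    -ArgumentList "-f $xmlPath 0" `
    -RedirectStandardOutput "C:\Jenkins\workspace\utils\disable.out.log" `
    -RedirectStandardError "C:\Jenkins\workspace\utils\disable.err.log"
# Surface the captured output and the exit code in the Jenkins build log.
Get-Content "C:\Jenkins\workspace\utils\disable.out.log" | Write-Host
Write-Host "disable.exe exited with code $($proc.ExitCode)"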
jenkinsfile
library identifier: 'jenkins-library@0.2.14',
    retriever: legacySCM([
        $class: 'GitSCM',
        userRemoteConfigs: [[
            credentialsId: 'BITBUCKET_RW',
            url: '<https://gitRepoUrl>'
        ]]
    ])
def executeStep(String stepName) {
    def butil = '.\\build\\build-utils.ps1'
    if (fileExists(butil)) {
        // Dot-source the utilities script, then invoke the requested step.
        def status = powershell(returnStatus: true, script: "& { . '${butil}'; ${stepName}; }")
        echo "$status"
        if (status != 0) {
            currentBuild.result = 'FAILURE'
            error("${stepName} failed")
        }
    }
    else {
        error("failed to find the file")
    }
}
pipeline {
    agent {
        docker {
            image '<path to the docker image to pull a server with VS2017 build tools>'
            label '<image name>'
            reuseNode true
        }
    }
    environment {
        // load the env variables here
    }
    stages {
        stage('<stage name>') {
            steps {
                executeStep("disable-utility ${env.WORKSPACE}")
            }
        }
    }
}
Thanks a lot in advance !
Have you changed UAC? In Regedit, go to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System and set "EnableLUA" = 0.

PowerShell unable to find type of exception in try/catch block

I'm running a PowerShell Function App (runtime ~3) which uses the Az PowerShell module to manage a Storage Account. If any of the operations I carry out result in an error, I am unable to check for specific exception types in a try/catch block.
It is important to note that where there is no error, operations using the Az.Storage module are successful.
For example, if I try to delete a container that does not exist, the example below results in the following error:
Unable to find type [Microsoft.WindowsAzure.Commands.Storage.Common.ResourceNotFoundException].
To obtain the type of the exception that is returned, I'm using $_.Exception.GetType().FullName.
I've also tried adding the namespace to the script that may produce the exceptions:
using namespace Microsoft.WindowsAzure.Commands.Storage.Common
Example
class Storage
{
    [AppSettings]$AppSettings = [AppSettings]::GetInstance()
    [Object]$Context

    Storage()
    {
        $key = Get-AzStorageAccountKey -ResourceGroupName $this.AppSettings.StorageAccountResourceGroup -Name $this.AppSettings.StorageAccountName
        $this.Context = New-AzStorageContext -StorageAccountName $this.AppSettings.StorageAccountName -StorageAccountKey $key[0].Value
    }

    [void] DeleteBlobContainer([String]$name)
    {
        try {
            Remove-AzStorageContainer -Name $name -Context $this.Context -Force -ErrorAction Stop
        }
        catch [Microsoft.WindowsAzure.Commands.Storage.Common.ResourceNotFoundException] {
            throw [ContainerNotFoundException]::new($name)
        }
        catch {
            throw [DustBinException]::new($_.Exception.Message)
        }
    }
}
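As an aside, when a catch type can't be resolved at parse time, one workaround is to match on the type name string inside a generic catch block; a minimal sketch using the question's own custom exception types:
catch {
    # Compare the runtime type name instead of using a typed catch clause,
    # so the script parses even when the assembly isn't loaded yet.
    if ($_.Exception.GetType().FullName -eq 'Microsoft.WindowsAzure.Commands.Storage.Common.ResourceNotFoundException') {
        throw [ContainerNotFoundException]::new($name)
    }
    throw [DustBinException]::new($_.Exception.Message)
}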
Update
When calling an HTTP-triggered function, I am able to see that the Az.Storage module is installed. This is expected, given that operations requiring the module are successful:
Get-Module -Name Az.Storage -ListAvailable | Select-Object Name, Version, ModuleBase | ConvertTo-Json
[
{
"Name": "Az.Storage",
"Version": {
"Major": 3,
"Minor": 0,
"Build": 0,
"Revision": -1,
"MajorRevision": -1,
"MinorRevision": -1
},
"ModuleBase": "C:\\Users\\dgard\\AppData\\Local\\AzureFunctions\\DustBin\\ManagedDependencies\\201202095548376.r\\Az.Storage\\3.0.0"
}
]
However, if I copy the module to .\bin and include a module manifest to require Microsoft.Azure.Storage.Common.dll, as suggested in this question, the type is still not found.
New-ModuleManifest ./Modules/StorageHelper/StorageHelper.psd1 -RootModule StorageHelper.psm1 -RequiredAssemblies .\bin\Az.Storage\3.0.0\Microsoft.Azure.Storage.Common.dll
To be sure I was adding the correct assembly, I updated the manifest to include every single assembly in the Az.Storage module, but the type was still not found.
In my question I added an update mentioning that I had tried a module manifest requiring all of the Az.Storage assemblies; this was not quite correct.
I had copied the list of required assemblies from the module manifest included with the Az.Storage module, but that list did not include Microsoft.Azure.PowerShell.Cmdlets.Storage.dll. Using a module manifest to require this assembly (just this one, no others required) has worked:
New-ModuleManifest ./Modules/StorageHelper/StorageHelper.psd1 -RootModule StorageHelper.psm1 -RequiredAssemblies .\bin\Az.Storage\3.0.0\Microsoft.Azure.PowerShell.Cmdlets.Storage.dll

Assertion over each item in collection in Pester

I am doing some infrastructure testing in Pester, and there is a repeating scenario that I don't know how to approach.
Let's say, I want to check whether all required web roles are enabled on IIS. I have a collection of required web roles and for each of them I want to assert it is enabled.
My current code looks like this:
$requiredRoles = @(
    "Web-Default-Doc",
    "Web-Dir-Browsing",
    "Web-Http-Errors",
    "Web-Static-Content",
    "Web-Http-Redirect"
)
Context "WebRoles" {
It "Has installed proper web roles" {
$requiredRoles | % {
$feature = Get-WindowsOptionalFeature -FeatureName $_ -online
$feature.State | Should Be "Enabled"
}
}
}
It works in the sense that the test will fail if any of the roles are not enabled/installed. But that is hardly useful when the output of such a Pester test looks like this:
Context WebRoles
[-] Has installed proper web roles 2.69s
Expected: {Enabled}
But was: {Disabled}
283: $feature.State | Should Be "Enabled"
This result doesn't give any clue about which feature is the disabled one.
Is there any recommended practice for these scenarios? I was thinking about some string manipulation:
Context "WebRoles" {
It "Has installed proper web roles" {
$requiredRoles | % {
$feature = Get-WindowsOptionalFeature -FeatureName $_ -online
$toCompare = "{0}_{1}" -f $feature.FeatureName,$feature.State
$toCompare | Should Be ("{0}_{1}" -f $_,"Enabled")
}
}
}
which would output:
Context WebRoles
[-] Has installed proper web roles 2.39s
Expected string length 27 but was 28. Strings differ at index 20.
Expected: {IIS-DefaultDocument_Enabled}
But was: {IIS-DefaultDocument_Disabled}
-------------------------------^
284: $toCompare | Should Be ("{0}_{1}" -f $_,"Enabled")
...which is better, but it doesn't feel very good.
Also, there is a second problem: the test will stop on the first failure, so I would need to re-run it after fixing each feature.
Any ideas?
Put your It inside the loop, like so:
Context "WebRoles" {
    $requiredRoles | ForEach-Object {
        It "Has installed web role $_" {
            (Get-WindowsOptionalFeature -FeatureName $_ -Online).State | Should Be "Enabled"
        }
    }
}
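In newer Pester versions, the same per-role reporting can also be had with parameterized tests; a sketch (assuming Pester v4.6 or later, where -TestCases and name expansion via <Role> are available, along with the Should -Be operator syntax):
Context "WebRoles" {
    # One hashtable per role; <Role> is expanded in each test name.
    $testCases = $requiredRoles | ForEach-Object { @{ Role = $_ } }
    It "Has installed web role <Role>" -TestCases $testCases {
        param($Role)
        (Get-WindowsOptionalFeature -FeatureName $Role -Online).State | Should -Be 'Enabled'
    }
}
Each role gets its own test result, so one disabled role doesn't stop the remaining assertions.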

DSC Script resource does not read environment variable

I have a DSC configuration which installs Node.js, adds npm to the Path environment variable, and then installs an npm module.
xPackage InstallNodeJs {
    Name = 'Node.js'
    Path = "$env:SystemDrive\temp\node-v4.4.7-x64.msi"
    ProductId = '8434AEA1-1294-47E3-9137-848F546CD824'
    Arguments = "/quiet"
}
Environment AddEnvironmentPaths
{
    Name = "Path"
    Ensure = "Present"
    Path = $true
    Value = "$env:SystemDrive\ProgramData\npm"
}
Script UpgradeNpm {
    SetScript = {
        & npm install --global --production npm-windows-upgrade
        & npm-windows-upgrade --npm-version 3.10.6
    }
    TestScript = {
        $npmVersion = & npm -v
        return $npmVersion -eq "3.10.6"
    }
    GetScript = {
        return @{ Result = "UpgradeNpm" }
    }
}
Installing Node.js and adding npm to the Path variable seem to be successful. Both the Node.js and npm locations are added to Path, and I can use them both in PowerShell and cmd.
However, the Script resource returns that 'npm' is not recognized as an internal or external command...
The same goes for node, which is used inside the npm-windows-upgrade script file.
Do you know why the Script resource cannot read newly added Path entries?
The Environment DSC resource implementation makes the change by updating the values stored in the registry (with the exception of variables targeting Process). Changes made to environment variables stored in the registry are not reflected in the current session (read once, on session start).
You can affect values stored in the current session by:
Using System.Environment.SetEnvironmentVariable ([System.Environment]::SetEnvironmentVariable)
Modifying $env:<VariableName>
Of those, only the first allows you to write a persistent change; the latter can be considered a volatile change. Inside the Script resource, you can work around this by rebuilding $env:Path from the persisted values before calling npm, as in the sketch below.
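A minimal sketch of that workaround (the Machine/User registry scopes are the standard persistence locations; the npm commands are the question's own):
SetScript = {
    # Re-read the persisted Path values so this session sees the entry
    # the Environment resource wrote to the registry.
    $machinePath = [System.Environment]::GetEnvironmentVariable('Path', 'Machine')
    $userPath = [System.Environment]::GetEnvironmentVariable('Path', 'User')
    $env:Path = "$machinePath;$userPath"
    & npm install --global --production npm-windows-upgrade
    & npm-windows-upgrade --npm-version 3.10.6
}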
It's an odd limitation of the resource; I've looked at this before and felt it a little lacking.
There's no dependency information in there, so you can't count on the Environment resource running before the Script resource. There's not enough information in your post to tell whether that's the case for sure, but you should consider controlling it anyway:
xPackage InstallNodeJs {
    Name = 'Node.js'
    Path = "$env:SystemDrive\temp\node-v4.4.7-x64.msi"
    ProductId = '8434AEA1-1294-47E3-9137-848F546CD824'
    Arguments = "/quiet"
}
Environment AddEnvironmentPaths
{
    Name = "Path"
    Ensure = "Present"
    Path = $true
    Value = "$env:SystemDrive\ProgramData\npm"
    DependsOn = '[xPackage]InstallNodeJs'
}
Script UpgradeNpm {
    SetScript = {
        & npm install --global --production npm-windows-upgrade
        & npm-windows-upgrade --npm-version 3.10.6
    }
    TestScript = {
        $npmVersion = & npm -v
        return $npmVersion -eq "3.10.6"
    }
    GetScript = {
        return @{ Result = "UpgradeNpm" }
    }
    DependsOn = '[Environment]AddEnvironmentPaths'
}
Can you share which version of DSC you are using? You can get this by running $PSVersionTable in the PowerShell console. I am able to add to the PATH variable and use it in a Script resource.
configuration NPMTest
{
    Environment AddEnvironmentPaths
    {
        Name = 'Path'
        Ensure = 'Present'
        Path = $true
        Value = "$env:SystemDrive\ProgramData\npm"
    }
    Script p
    {
        GetScript = { @{} }
        TestScript = { return $false }
        SetScript = { $a = & a.ps1; Write-Verbose $a -Verbose }
    }
}
The script a.ps1 was executed fine even though I am not specifying the full path to the script.

Configure a DSC Resource to restart

I have a DSC resource that installs the .NET feature and then installs an update to .NET.
In the Local Configuration Manager I have set RebootNodeIfNeeded to $true.
After .NET installs, it does not request a reboot (I even used the xPendingReboot module to confirm this).
Configuration WebServer
{
    WindowsFeature NetFramework45Core
    {
        Name = "Net-Framework-45-Core"
        Ensure = "Present"
    }
    xPendingReboot Reboot
    {
        Name = "Prior to upgrading Dotnet4.5.2"
    }
    cChocoPackageInstaller InstallDotNet452
    {
        Name = "dotnet4.5.2"
    }
}
This is a problem, as .NET doesn't work properly with our app unless the server has been rebooted, and we are trying to make these reboots happen automatically, with no user input required.
Is there any way for a resource to tell the Local Configuration Manager (LCM) that it needs a reboot when something is being installed?
I have found the command below, which flags a reboot, but I'm unsure how to use it to reboot right after the .NET 4.5 module is installed:
$global:DSCMachineStatus = 1
Normally when I install .NET it works without rebooting, but if you want to force your configuration to reboot after installing it, you can do the following. It won't catch drift (.NET being removed after the initial installation): during configuration drift, the configuration will still install .NET, but the Script resource I added to reboot will believe it has already rebooted.
The DependsOn is very important here; you don't want this script running before the WindowsFeature has run successfully.
configuration WebServer
{
    WindowsFeature NetFramework45Core
    {
        Name = "Net-Framework-45-Core"
        Ensure = "Present"
    }
    Script Reboot
    {
        TestScript = {
            # A marker key records that the post-install reboot already happened.
            return (Test-Path HKLM:\SOFTWARE\MyMainKey\RebootKey)
        }
        SetScript = {
            New-Item -Path HKLM:\SOFTWARE\MyMainKey\RebootKey -Force
            # Signal the LCM that this node needs a reboot.
            $global:DSCMachineStatus = 1
        }
        GetScript = { return @{ Result = 'result' } }
        DependsOn = '[WindowsFeature]NetFramework45Core'
    }
}
To get $global:DSCMachineStatus = 1 working, you first need to configure the Local Configuration Manager on the remote node to allow automatic reboots. You can do it like this:
Configuration ConfigureRebootOnNode
{
    param (
        [Parameter(Mandatory=$true)]
        [ValidateNotNullOrEmpty()]
        [String]
        $NodeName
    )
    Node $NodeName
    {
        LocalConfigurationManager
        {
            RebootNodeIfNeeded = $true
        }
    }
}
ConfigureRebootOnNode -NodeName myserver
Set-DscLocalConfigurationManager .\ConfigureRebootOnNode -Wait -Verbose
(code taken from Colin's ALM Corner)
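To confirm the LCM setting took effect, you can read the settings back from the node afterwards; a quick check (Get-DscLocalConfigurationManager is the standard cmdlet; myserver is the example's own node name):
# Read back the LCM settings and verify automatic reboots are enabled.
Get-DscLocalConfigurationManager -CimSession myserver |
    Select-Object RebootNodeIfNeeded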