Problem with finding ApplicationManifest path for service fabric application - azure-service-fabric

I have a TeamCity CI server which pushes the Service Fabric app in a zip to Octopus. This goes well.
My connection from Octopus to the Azure Service Fabric cluster is healthy.
I make a release from the zip package in my project.
I deploy the package. The Acquire Packages step succeeds.
The Deploy step fails.
It says the following in the logs: Deploying package: E:\Octopus\Packages\Spaces-1\feeds-builtin\xxSF\xxSF.1.0.0.225.zip
I have tried to change the path in Octopus Deploy, but I cannot find where it can be changed. I have read the Octopus Deploy documentation, but it did not help.
Get-Content : Cannot find path 'C:\Octopus\Work\20190501091314-1918-1033\staging\ApplicationManifest.xml' because it does not exist.
I do not understand where this path comes from; my guess is that it is where the package gets extracted. My ApplicationManifest.xml is in the zip package. Does anyone know what I have to do here?
I checked the raw logs and saw this:
Info | Deploying package: E:\Octopus\Packages\Spaces-1\feeds-builtin\xx\xx.1.0.0.225.zip
Verbose | Extracting package to: C:\Octopus\Work\20190501091314-1918-1033\staging
Verbose | Extracted 90 files
Verbose | Performing variable substitution on 'C:\Octopus\Work\20190501091314-1918-1033\staging\packages.config'
Verbose | Performing variable substitution on 'C:\Octopus\Work\20190501091314-1918-1033\staging\ApplicationPackageRoot\ApplicationManifest.xml'
Verbose | Performing variable substitution on 'C:\Octopus\Work\20190501091314-1918-1033\staging\ApplicationParameters\Cloud.xml'
Verbose | Performing variable substitution on 'C:\Octopus\Work\20190501091314-1918-1033\staging\ApplicationParameters\Local.1Node.xml'
Verbose | Performing variable substitution on 'C:\Octopus\Work\20190501091314-1918-1033\staging\ApplicationParameters\Local.5Node.xml'
Verbose | Performing variable substitution on 'C:\Octopus\Work\20190501091314-1918-1033\staging\PackageRoot\ServiceManifest.xml'
Verbose | Performing variable substitution on 'C:\Octopus\Work\20190501091314-1918-1033\staging\PackageRoot\Config\Settings.xml'
Verbose | Performing variable substitution on 'C:\Octopus\Work\20190501091314-1918-1033\staging\PublishProfiles\Cloud.xml'
Verbose | Performing variable substitution on 'C:\Octopus\Work\20190501091314-1918-1033\staging\PublishProfiles\Local.1Node.xml'
Verbose | Performing variable substitution on 'C:\Octopus\Work\20190501091314-1918-1033\staging\PublishProfiles\Local.5Node.xml'
Error | Get-Content : Cannot find path 'C:\Octopus\Work\20190501091314-1918-1033\staging\ApplicationManifest.xml' because it does not exist.
It looks like it is not looking in the staging\ApplicationPackageRoot subfolder, which is where ApplicationManifest.xml is.

I fixed the problem. This is the solution:
Make sure the package you upload from your CI server has the following structure:
ApplicationParameters (folder)
PublishProfiles (folder)
YourServiceFabric (folder containing your Service Fabric service). This has the same name as the ServiceManifestName mentioned in ApplicationManifest.xml. The name is specific, so make sure you have the right one. You will have to build the .sfproj in order to get the .dlls put into this folder.
ApplicationManifest.xml (file)
This is all that Octopus Deploy needs to deploy your Service Fabric application.
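For illustration, a package laid out like this deploys correctly (the names here are hypothetical; only the structure matters):
MyApp.1.0.0.zip
    ApplicationManifest.xml
    ApplicationParameters\ (folder)
    PublishProfiles\ (folder)
    MyServicePkg\ (folder; the name must match the ServiceManifestName in ApplicationManifest.xml)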
My approach was as follows: I packaged the Service Fabric application in Visual Studio and saw that a specific set of files was packed, as stated above. I then manually uploaded this package to the Octopus Deploy server and created a release/deployment. This went well, so I gathered the same files from the CI server, pushed them to Octopus Deploy, and it worked.

Related

How to read helm environment specific variables (replaced by octopus) in nodejs application

I have a Node.js application deployed with Octopus using helm.
I want to read the appVersion value in the chart's .yaml file, which is replaced by Octopus.
How can I read that in the Node.js application?
I had a similar problem, but I solved it using a shell script that parses out the line I needed.
As the order of lines in YAML is irrelevant, I put the appVersion line last, then used the code below to get the version.
tail -n 1 helm/pfweb/Chart.yaml | awk '{print $2}'
You could run this command from Node using child_process.
Alternatively, you can read the file from Node using readFileSync and parse out the line you want. IMHO, that is the more painful way to solve it, but then I don't program in Node.
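If you would rather not rely on line order at all, a grep-based variant works too (a sketch, assuming the key starts a line in Chart.yaml):
grep '^appVersion:' helm/pfweb/Chart.yaml | awk '{print $2}'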

Jenkins Powershell Output

I would like to capture the output of some variables to be used elsewhere in the job using Jenkins Powershell plugin.
Is this possible?
My goal is to build the latest tag, and the PowerShell script was meant to find it. Outputting to a text file would not help, and environment variables can't be used because the process is seemingly forked, unfortunately.
Besides EnvInject, another common approach for sharing data between build steps is to store results in files in the job workspace.
The idea is to skip environment variables altogether and just write/read files.
It seems that the only solution is to combine this with the EnvInject plugin. You can create a text file with key=value pairs from PowerShell, then export them into the build using EnvInject.
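A minimal sketch of that handoff (the file name and variable are illustrative; note that Out-File must not write PowerShell's default UTF-16, which EnvInject cannot read):
# PowerShell build step: write key=value pairs into the workspace
"LATEST_TAG=$(git describe --tags --abbrev=0)" | Out-File -Encoding ASCII build_vars.properties
# Then point an EnvInject step at build_vars.properties; LATEST_TAG becomes
# an environment variable available to the remaining build steps.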
You should make the workspace persistent for this job; then you can save the data you need to a file. Other jobs can then access this persistent workspace, or use it as their own, as long as they are on the same node.
Another option would be to use Jenkins' built-in artifact retention: at the end of the job's configure page there is an option to retain files matching a pattern (e.g. *.xml or last_build_number). These are then given a specific address that can be used by other jobs regardless of which node they are on; the address can be on the master or the node, IIRC.
For the simple case of wanting to read a single object from Powershell you can convert it to a JSON string in Powershell and then convert it back in Groovy. Here's an example:
def pathsJSON = powershell(returnStdout: true, script: "ConvertTo-Json ((Get-ChildItem -Path *.txt) | select -Property Name)")
def paths = []
if (pathsJSON != '') {
    paths = readJSON text: pathsJSON
}

How to pass a parameter to Chef recipe from external source

I'm new to Chef and seeking help here. I'm looking into using Chef to deploy our builds to Chef node servers (Windows Server 2012 machines). I have a cookbook called copy_builds that goes out to a central repository and selects the build we want to deploy and copies it out to the node server. The recipe I have contains basic steps that perform the copy steps, and this recipe could be used for all builds we want to deploy except for one thing: the build name.
Here is an example of the recipe:
powershell_script 'Copy build files' do
  code '
    $Project = "Dev3_SomeCoolBuild"
    net use "\\\\server\\build_share\\drop\\$Project"
    $BuildNum = GC "\\\\server\\build_share\\drop\\$Project\\buildlabel.txt"
    robocopy \\\\server\\build_share\\drop\\$Project\\bin W:\\binroot\\$BuildNum'
end
As you can see, the variable $Project contains the name of the build in this recipe. If we have 100 different builds, all with different names, then what is the best way to handle this without creating 100 different recipes for my copy_builds cookbook?
BTW: this is how I'm currently calling Chef to deploy, which is in a PowerShell script that's external to Chef:
knife node run_list set $Node "recipe[copy_builds::$ProjectName],recipe[install_build]"
This command (from the external PowerShell script) contains the project/build name in its own $ProjectName variable. In this case $ProjectName contains the value 'Dev3_SomeCoolBuild', to reference the recipe Dev3_SomeCoolBuild.rb.
What I'd like is to have just one default recipe under the copy_builds cookbook and pass in the build/project name. Is this possible? And what is the best way to do it? I've read about data bags, attributes, and providers, but I'm not sure whether they would work for what I want.
Please advise.
Thanks,
Keith
The best approach for you is likely to use a single recipe that gets a list of projects to deploy from a data bag or node attributes (or both). So basically take what you have now and put it in a loop, then use either roles to set node attributes or put the project mapping into a data bag item.
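A minimal sketch of that idea, assuming a node attribute node['copy_builds']['projects'] holds the list of build names (the attribute name is hypothetical):
node['copy_builds']['projects'].each do |project|
  powershell_script "Copy build files for #{project}" do
    code <<-EOH
      net use "\\\\server\\build_share\\drop\\#{project}"
      $BuildNum = GC "\\\\server\\build_share\\drop\\#{project}\\buildlabel.txt"
      robocopy \\\\server\\build_share\\drop\\#{project}\\bin W:\\binroot\\$BuildNum
    EOH
  end
end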
I ended up using attributes here to solve my problem. I updated my script to write the build name to the attributes/default.rb file of the copy_builds cookbook and to upload the cookbook to Chef each time a deployment is run.
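For reference, the generated attributes file only needs a single line (the value shown is just an example):
default['copy_builds']['build'] = 'Dev3_SomeCoolBuild'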
My recipe now includes a call to the attributes file to get the build name, like so:
powershell_script 'Copy build files' do
  code <<-EOH
    $BuildNum = GC \\\\hqfas302002c\\build_share\\drop\\"#{node['copy_builds']['build']}"\\buildlabel.txt
    robocopy \\\\hqfas302002c\\build_share\\drop\\"#{node['copy_builds']['build']}"\\webbin W:\\binroot\\$BuildNum /E
  EOH
end
And now my call to Chef looks like this:
knife node run_list set $Node "recipe[copy_builds],recipe[install_build]"

How to deploy to multiple environments with webpack using msdeploy

I've got a .NET WebAPI solution, and a UI built in Angular2 RC4 (angular-cli webpack version). I'm confused about how to deploy these to different environments, especially configuration parameters - there seems to be a mismatch between the .NET way and the UI way of doing things, which I don't quite get.
Here's how I've got it currently in TeamCity. The WebAPI solution is built once only, and is configured at deploy time. The various configuration parameters the project needs (such as connection strings, endpoints etc.) are stored in web.config. When I deploy to my test environment using MSDeploy, I pass in setParam arguments to the MSDeploy command line which replaces the connection strings and endpoints in the web.config with those values. When I deploy to production, I use the same build but pass in different arguments to the setParam in the command line.
This approach makes sense to me because I know that the exact same build is going from one environment to the next, the only difference being the parameters I specifically told it to set for each environment. Super.
With Angular2 and webpack it looks like a different approach is needed. When I build my project (with ng build -prod) it minifies and bundles my HTML and JavaScript files into 3 or 4 files, along with gzipped versions of those files. This is great for reducing file size and increasing the speed of my website, but there is no way to "inject" configuration parameters into these gzipped files like there is with MSDeploy's setParam. Everything I've seen that mentions webpack shows a webpack.dev.config.js and a webpack.prod.config.js. But doesn't that mean we need to build a different bundle for each environment? And with Angular2 the webpack bit is considered "a black box" anyway, so it's not possible to supply your own webpack.config file.
The only workaround I can think of is to use TeamCity's "File Content Replacer" on "main.1234abcd6946c6a08519.bundle.js" to replace my configuration parameters with the values for that environment, then gzip that file, overwriting the one created by webpack.
But this is horrible, so does anyone have a better suggestion?
I don't have any experience with webpack, and I don't know whether this is better than your workaround, but you can use the TextFile kind of setParam entry to alter any file in your project using a regex find/replace at deploy time.
https://technet.microsoft.com/en-us/library/dd569084(v=ws.10).aspx
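For example (a sketch; the parameter name, file regex, and URLs are illustrative), you could declare a TextFile parameter when creating the package and override it per environment at deploy time:
REM declare the parameter when creating the package
"%env.MSDEPLOY%" -verb:sync ^
  -source:contentPath="%teamcity.build.workingDir%\dist" ^
  -dest:package="%teamcity.build.checkoutDir%\Package.zip" ^
  -declareParam:name=ApiUrl,kind=TextFile,scope=bundle\.js$,match=http://localhost:12345,defaultValue=http://localhost:12345
REM then at deploy time, add: -setParam:name=ApiUrl,value=https://test.example.com/api
Note this would only rewrite the plain .js bundle, not the gzipped copies webpack emits alongside it.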
I went with creating a separate package for each environment. I added a build step that replaces the localhost API URL in src\app\environment.ts with the appropriate URL for that environment, then runs npm run build-prod and then MSDeploy to create the package. I do this for every environment I want to target.
Here's the script:
REM =====CREATE TEST PACKAGE==================================================
REM backup the environment file
ren src\app\environment.ts environment.ts.bak
copy /Y src\app\environment.ts.bak src\app\environment.ts
REM replace localhost in environment file with the TEST environment URL
"%env.FART%" src\app\environment.ts http://localhost:12345 %TEST.api.url%
REM build using this environment
call npm run build-prod
REM restore backup environment file
del /Q src\app\environment.ts
ren src\app\environment.ts.bak environment.ts
REM create TEST package
"%env.MSDEPLOY%" ^
-verb:sync ^
-source:contentPath="%teamcity.build.workingDir%\dist" ^
-dest:package="%teamcity.build.checkoutDir%\Package_TEST.zip"
REM =====CREATE PROD PACKAGE==================================================
REM backup the environment file
ren src\app\environment.ts environment.ts.bak
copy /Y src\app\environment.ts.bak src\app\environment.ts
REM replace localhost in environment file with the PROD environment URL
"%env.FART%" src\app\environment.ts http://localhost:12345 %PROD.api.url%
REM build using this environment
call npm run build-prod
REM restore backup environment file
del /Q src\app\environment.ts
ren src\app\environment.ts.bak environment.ts
REM create PROD package
"%env.MSDEPLOY%" ^
-verb:sync ^
-source:contentPath="%teamcity.build.workingDir%\dist" ^
-dest:package="%teamcity.build.checkoutDir%\Package_PROD.zip"
By the way, %env.FART% is the location of fart.exe (Find And Replace Text), a great tool I use to replace one string in a file with another.

dpkg: How to use trigger?

I wrote a little CDN server that rebuilds its registry pool when new pool-content packages are installed into that registry pool.
Instead of having each pool-content package call the init.d script of the CDN server, I'd like to use triggers. That way the server would be restarted only once, at the end of an installation run, after all packages were installed.
What do I have to do to use triggers in my packages with debhelper support?
What you are looking for is dpkg-triggers.
One solution, using debhelper to build the Debian packages, is this:
Step 1)
Create the file debian/<serverPackageName>.triggers (replace <serverPackageName> with the name of your server package).
Step 1a)
Define a trigger that watches the directory of your pool. The content of the file would be:
interest /path/to/my/pool
Step 1b)
Alternatively, you can define a named trigger, which has to be fired explicitly (see step 3).
Content of the file:
interest cdn-pool-changed
The name of the trigger, cdn-pool-changed, is arbitrary. You can choose whatever you want.
Step 2)
Add a handler for the trigger to the file debian/<serverPackageName>.postinst (replace <serverPackageName> with the name of your server package).
Example:
#!/bin/sh
set -e
case "$1" in
configure)
;;
triggered)
#here is the handler
/etc/init.d/<serverPackageName> restart
;;
abort-upgrade|abort-remove|abort-deconfigure)
;;
*)
echo "postinst called with unknown argument \`$1'" >&2
exit 1
;;
esac
#DEBHELPER#
exit 0
Replace <serverPackageName> with the name of your server package.
Step 3) (only for named triggers, see step 1b)
Add to every content package the file debian/<contentPackageName>.triggers (replace <contentPackageName> with the name of each content package).
Content of the file:
activate cdn-pool-changed
Use the same trigger name you defined in step 1b.
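To summarize, for the named-trigger variant three files are involved:
debian/<serverPackageName>.triggers   ->  interest cdn-pool-changed
debian/<serverPackageName>.postinst   ->  handles the "triggered" case (step 2)
debian/<contentPackageName>.triggers  ->  activate cdn-pool-changed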
More detailed Information
The best description of dpkg triggers I could find is "How to use dpkg triggers". You can get the corresponding git repository with examples here:
git clone git://anonscm.debian.org/users/seanius/dpkg-triggers-example.git
I had a need for triggers and read and re-read the docs many times. I think the process, or rather what goes where, is not clearly explained. Here I hope to clarify the use of Debian package triggers.
Service with Configuration Directory
A service reading its settings in a specific directory can mark that directory as being of interest.
Say I create a new service which reads settings from /usr/share/my-service/config/...
That service gets two additions:
In its debian directory I add my-service.triggers
And here are the contents:
# my-service.triggers
interest /usr/share/my-service/config
This means if any other package installs or removes a file from that directory, the trigger enters its "needs to be run" state.
In its debian directory I also add my-service.postinst
And I have a script as follows to check whether the trigger happened and run a process as required:
# my-service.postinst
if [ "$1" = "triggered" ]
then
    if [ "$2" = "/usr/share/my-service/config" ]
    then
        # this may or may not be what you need to do, but this is often
        # how you handle a change in your service config files
        #
        systemctl restart my-service
    fi
    exit 0
fi
That's it.
Now packages adding extensions to your service can add their own configuration file(s) under /usr/share/my-service/config (or a directory under /etc/my-service/my-service.d/... or /var/lib/my-service/..., although that last one should be reserved for dynamic files, not files installed from a package) and dpkg automatically calls your postinst script with:
postinst triggered /usr/share/my-service/config
# where /usr/share/my-service/config is your <interest-path>
This call happens only once, after all the packages were installed; hence the advantage of having a trigger in the first place. This way each package does not need to know that it has to restart my-service, and the restart does not happen more than once, which could cause all sorts of side effects (e.g. the service tries to listen on a TCP port and gets "address already in use").
IMPORTANT: keep in mind that the postinst should include a line with #DEBHELPER#.
So you do not have to do anything special in other packages. Only make sure to install the configuration files in the correct directory and dpkg picks up from there (i.e. in my example under /usr/share/my-service/config).
I have an extension to BIND9 called ipmgr which makes use of .ini files saved in a specific folder. It uses the files to generate DNS zones (way fewer errors that way!), and it includes support for getting Let's Encrypt certificates and settings for DMARC/DKIM. This package uses this case: a simple directory where configuration files get installed. Other packages do not need to do anything other than install files in the right place (/usr/share/ipmgr/zones, for this package).
Service without a Configuration Folder
In some (rare?) cases, you may need to trigger something in a service which is not driven by the installation of a new configuration file.
In this case, you can use an arbitrary name (it should include your package name to make sure it is unique since this name is global to the entire Debian/Ubuntu system).
To make this one work, you need three files, one of which is a triggers file in each of the other packages.
State the Interest
As above, we have an interest. In this case, the interest is stated as a name on its own. The dpkg system distinguishes between a name and a path because a name cannot include a slash (/) character. Names are limited to ASCII, excluding control characters and spaces; I would suggest you stick to a-z, 0-9, and dashes (-).
# my-service.triggers
interest my-service-settings
This is useful if you cannot simply track a folder. For example, the settings could come from a network connection that a package offers once installed.
Listen for the Triggers
Again, as above, you need a postinst script in your Service Package. This captures the trigger and allows you to run a command. The script is the same, only you test for the name instead of the folder (note that you can have any number of triggers, so you could also have both: a folder as above and a special name as here).
# my-service.postinst
if [ "$1" = "triggered" ]
then
    if [ "$2" = "my-service-settings" ]
    then
        # this may or may not be what you need to do, but this is often
        # how you handle a change in your service config files
        #
        systemctl restart my-service
    fi
    exit 0
fi
The Trigger
As mentioned above, we need a third file. An arbitrary name is not going to be triggered automatically by dpkg; it has no way of knowing that your other package needs to fire the trigger (although, as shown, the mechanism is otherwise fairly automated).
So in other packages, you create a trigger file which looks like this:
# other-package.triggers
activate my-service-settings
Notice the name: it is the same as the interest stated above.
In other words, if the trigger needs to run for something other than just the installation of files in a given location, use a special name and add this triggers file with the activate keyword.
Other Features
I have not tested the other features of the dpkg-trigger(1) tool. There are other keywords supported in triggers files:
interest
interest-await
interest-noawait
activate
activate-await
activate-noawait
The deb-triggers manual page has additional information about those. I am not too sure what await/noawait imply, other than that the trigger may happen at any time when noawait is used.
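For example, the directory interest from above could be declared with the noawait variant (a sketch; whether you want this depends on whether triggering packages may be considered configured before the trigger has been processed):
# my-service.triggers
interest-noawait /usr/share/my-service/config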
Automatic Trigger Added
The build system on Ubuntu (probably Debian too) automatically adds a triggers file with the following when your package includes a library:
$ cat triggers
# Triggers added by dh_makeshlibs/11.1.6ubuntu2
activate-noawait ldconfig
I suggest you exercise caution if your package includes libraries. If you have your own triggers file, I do not know whether this addition will still happen automatically.
This also shows a special case where noawait is wanted. If I understand correctly, the ldconfig trigger has to run ASAP so your commands work as expected right after the unpack; otherwise ldd would not know anything about your newly installed library.