I am working on a project which has multiple modules, each with its own doxyfile. My idea is to have a single master doxyfile that can include the other, private doxyfiles to create one big documentation set for the project. The directory structure looks like the following:
MyProject+
|-Private_Prj1+
| |-Doc+
|   |-doxyfile_privateprj1
|-Private_Prj2+
| |-Doc+
|   |-doxyfile_privateprj2
|-Doc+
  |-doxyfile_myproject (AKA Master Doxyfile)
How can I configure doxyfile_myproject to include doxyfile_privateprj1 and doxyfile_privateprj2 in such a way that when I run Doxygen on doxyfile_myproject, it runs the other doxyfiles in sequence as well?
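One possible approach, sketched under the assumption that Doxygen is invoked from MyProject/Doc: the Doxygen config format has an @INCLUDE directive that pulls another Doxyfile's settings into the current one. Note this produces a single merged run rather than sequential runs of the private doxyfiles.

# doxyfile_myproject (sketch; paths assume Doxygen is run from MyProject/Doc)
@INCLUDE_PATH = ../Private_Prj1/Doc ../Private_Prj2/Doc
@INCLUDE      = doxyfile_privateprj1
@INCLUDE      = doxyfile_privateprj2
# In the included files, list options should append (e.g. INPUT += ../src, an example path)
# so the settings merge into one combined documentation set instead of overwriting each other.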
I have a PyQt5 (5.15.6) application running in Python 3 and want to reference my qss file like this:
qss_file = QtCore.QFile("my_app_qss.qss")
However, I have multiple apps that use the same qss file, so depending on where I run an app from, I need an absolute path rather than a relative one. I would also like to compile any of those apps with PyInstaller and deploy them to another machine. How can I reference this qss file?
example folder structure:
main
|- resources/my_app_qss.qss
|- apps/
|    |- project1/app1.py
|    |- project2/
|         |- subfolder/app2.py
The issue is that I did not understand that
qss_file = QtCore.QFile("my_app_qss.qss")
is not a path to a file on disk. It references a resource that gets built by pyrcc5 from the .qrc source.
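As a hedged sketch (the helper function and the parents[...] depth are assumptions based on the folder structure above, not an established API), one common pattern is to build an absolute path from __file__, with a fallback to PyInstaller's unpack directory when the app is frozen:

import sys
from pathlib import Path

from PyQt5 import QtCore


def resource_path(relative):
    # When frozen by PyInstaller, bundled data files are unpacked under sys._MEIPASS.
    if getattr(sys, "frozen", False):
        base = Path(getattr(sys, "_MEIPASS", Path(__file__).parent))
    else:
        # app1.py lives two directories below main/ (apps/project1), so walk up two parents;
        # app2.py in its subfolder would need parents[3] instead.
        base = Path(__file__).resolve().parents[2]
    return str(base / relative)


# Assumed usage from apps/project1/app1.py; the resources folder must be bundled,
# e.g. with PyInstaller's --add-data (separator is ";" on Windows, ":" elsewhere).
qss_file = QtCore.QFile(resource_path("resources/my_app_qss.qss"))

If you want the Qt resource route instead, the qss file would be listed in a .qrc file, compiled with pyrcc5, and then opened as QtCore.QFile(":/my_app_qss.qss").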
I have a module, built with Dist::Zilla. I have Dist::Zilla set up to automatically push changes out to my GitHub repo. Works great when the repo is private.
However, as soon as I make the repo public, I start getting errors during the build process. Specifically, the problem is with these lines in the dist.ini:
[Bugtracker]
web = http://github.com/myaccount/%s/issues
If I comment out these lines, it works. With these lines left in, I get an error:
Duplication of element resources.bugtracker.web at /Users/me/perl5/perlbrew/perls/perl-5.24.1/lib/site_perl/5.24.4/Dist/Zilla.pm line 595.
OK, so fine, I comment out the lines. However, another problem crops up. The version number of my builds no longer autoincrements and is stuck at the same number every time I try to release a build.
Is there some configuration setting I need to change with Dist::Zilla so it will play nice with public github repos? Here is the full dist.ini file:
name = Module-Test
author = me
license = Perl_5
copyright_holder = Me
copyright_year = 2018
[Repository]
;[Bugtracker]
;web = http://github.com/sdondley/%s/issues
[Git::NextVersion]
[GitHub::Meta]
[PodVersion]
[PkgVersion]
[NextRelease]
[Run::AfterRelease]
run = mv Changes tmp && cp %n-%v/Changes Changes
[InstallGuide]
[PodWeaver]
[ReadmeAnyFromPod]
type = markdown
location = root
phase = release
[Git::Check]
[Git::Commit]
allow_dirty = README.mkdn
allow_dirty = Changes
allow_dirty = INSTALL
[Git::Tag]
[Git::Push]
[Run::AfterRelease / MyAppAfter]
run = mv tmp/Changes Changes
[GatherDir]
[AutoPrereqs]
[PruneCruft]
[PruneFiles]
filename = weaver.ini
filename = README.mkdn
filename = dist.ini
filename = .gitignore
[ManifestSkip]
[MetaYAML]
[License]
[Readme]
[ExtraTests]
[ExecDir]
[ShareDir]
[MakeMaker]
[Manifest]
[TestRelease]
[FakeRelease]
Your [Bugtracker] entry leads to duplication because you are also setting the bugtracker through [GitHub::Meta]. Choose one or the other.
As for version number management, note that [Git::NextVersion] is based on your git tags. Make sure that these tags are present in your local repository and have the correct format. That plugin uses a command line invocation similar to this to obtain all tags:
git rev-list --simplify-by-decoration --pretty=%d HEAD | grep -oE 'tag: [^,)\s]+'
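If those tags are missing locally (for example after a fresh clone) or do not match the expected format, the version will appear stuck. A hedged sketch of checking and repairing them (the tag name and commit below are placeholders, not taken from your repo):

git fetch --tags            # pull any release tags that only exist on GitHub
git tag --list              # by default Git::NextVersion looks for tags like v0.001, v0.002, ...
git tag v0.002 <commit>     # example: tag the last released commit if it was never tagged
git push --tags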
Public GitHub repos should not be a problem for Dist::Zilla – this is exactly the setup most dzil distros use anyway. But interactions between multiple plugins can lead to hard to track down bugs, especially since the order of plugins is important. It can help to organize your plugins by the phase in which they run, and to test whether the problem persists after removing optional plugins. It also tends to be better to start with a simple dist.ini and add plugins as pain points in your development process become apparent.
I'm using Terraform in a modular fashion in order to build out my infrastructure. I do this by having a configuration file that calls in the different modules. I want to pass an infrastructure variable that picks up which tagged version of the GitHub repository the application should build from. Most importantly, I'm trying to figure out how to concatenate a string in the "source" argument of the configuration file.
module "athenaelb" {
source = "${concat("git::https://github.com/ORG/REPONAME.git?ref=",var.infra_version)}"
aws_access_key = "${var.aws_access_key}"
aws_secret_key = "${var.aws_secret_key}"
aws_region = "${var.aws_region}"
availability_zones = "${var.availability_zones}"
subnet_id = "${var.subnet_id}"
security_group = "${var.athenaelb_security_group}"
branch_name = "${var.branch_name}"
env = "${var.env}"
sns_topic = "${var.sns_topic}"
s3_bucket = "${var.elb_s3_bucket}"
athena_elb_sns_topic = "${var.athena_elb_sns_topic}"
infra_version = "${var.infra_version}"
}
I want it to compile and for the source to look like this (for example): git::https://github.com/ORG/REPONAME.git?ref=v1
Anyone have any thoughts on how to make this work?
Thanks,
Keren
This is not possible currently in Terraform itself.
The only way to achieve something like this is to use a separate script that interacts with the git repository Terraform clones into a subdirectory of the .terraform/modules directory, and switches it to a different tag depending on which version you need. This is non-ideal, since Terraform organizes these into directories based on a hash of the module path, but if you can identify the module in question it is safe to run git checkout within that repository, as long as you do not run terraform get again afterwards.
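A hedged shell sketch of that workaround (the module directory name and the tag are placeholders you would need to identify yourself):

terraform get                                 # clones the module source into .terraform/modules
cd .terraform/modules/<hashed-module-dir>     # placeholder: find the right clone, e.g. by its git remote
git checkout v1                               # example tag; switch to the version you need
cd -
terraform plan                                # do not run `terraform get` again, per the note above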
For more details and discussion on this issue, see issue #1439 in Terraform's issue tracker, where this feature was requested.
You could use envsubst or Python Jinja, with wrapper scripts in your pipeline's deploy script, to build the Terraform files from .envsubst and .jinja templates before your terraform plan/apply:
https://github.com/uvoo/process-templates/tree/main/scripts
I wish Terraform would support this, but my guess is it never will, so adding some simple functions/files to your deploy scripts is usually the best way to deploy.
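For example, a hedged sketch of the envsubst variant (the file names and the INFRA_VERSION variable are assumptions, not part of the linked scripts); restricting substitution to the one variable keeps Terraform's own "${var.*}" interpolations untouched:

# main.tf.envsubst -- template copy of the module block with a shell-style placeholder
module "athenaelb" {
  source = "git::https://github.com/ORG/REPONAME.git?ref=${INFRA_VERSION}"
  # ... remaining arguments unchanged ...
}

# deploy.sh -- render the template, then run Terraform as usual
export INFRA_VERSION=v1
envsubst '${INFRA_VERSION}' < main.tf.envsubst > main.tf
terraform init
terraform plan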
I'm trying to set up a deploy process that targets 16 websites, each hosting an instance of the same application.
Websites and AppPools are named as such:
appServer1:
app10.site.com
app11.site.com
app12.site.com
app13.site.com
appServer2:
app20.site.com
app21.site.com
app22.site.com
app23.site.com
etc.
etc.
...with each website having a correspondingly named AppPool.
I am desperately trying to determine how to use a single Deploy NuGet Package step to target all of these websites/app pools, using variables and a combination of PowerShell scripts if possible.
I'd like to have a single step where I can substitute the website and app pool names via variables, as this is the only difference. I basically need the equivalent of looping the NuGet package step, passing it a list of website and app pool names. I cannot simply use variables because variable scoping only resolves to the machine level.
Create a list of all website and AppPool names, then iterate over them, passing each value to a step for execution: a ForEach processing step, for lack of better words.
I do have the ability to rename the AppPools if need be for a more consistent pattern, but I cannot change the website names.
Any ideas would be greatly appreciated.
http://help.octopusdeploy.com/discussions/questions/3481-every-website-in-the-deploy-has-a-different-apppool-and-website-name-how-to-deal-no-other-differences
There's a lot to your question, but I'm going to take a stab at explaining our approach, in hopes of jogging your creative juices.
tl;dr
Simply put, use your own PowerShell scripts to install the web application. In there you can set the app pool name on a per-website basis.
For starters, we do a separate deployment step for each project. The scripts we use will allow you to do all deployments from a single deploy.ps1 (including unique appPool names), but we find that it really helps keep each deployment nice and lean, and easy to manage. Each project gets its own nupkg, which contains the predeploy.ps1, deploy.ps1, and postdeploy.ps1, as well as a folder of build/deploy scripts that we've open sourced, and a folder of environment config XML files.
A sample of an environment config would be this. The name is simply [envName].xml
<!-- environments\Production.xml -->
<environmentSettings>
<webSites>
<app>
<physicalPathRoot>c:\inetpub</physicalPathRoot>
<physicalFolderPrefix>appname</physicalFolderPrefix>
<siteProtcol>https</siteProtcol>
<siteName>appname.tld</siteName>
<siteHost>appname.tld</siteHost>
<portNumber>443</portNumber>
<appPath>/</appPath>
<appPool>
<name>appname.tld</name>
<!-- valid identityTypes are: [LocalSystem, LocalService, NetworkService, SpecificUser, ApplicationPoolIdentity] -->
<identityType>NetworkService</identityType>
<!-- Set this value to the User the Service will run under in the format DOMAIN\username -->
<!-- If Running as 'NetworkService' then 'NT AUTHORITY\Network Service' is used -->
<userName>NT AUTHORITY\Network Service</userName>
<!-- Leave blank unless using SpecificUser -->
<password></password>
<maxWorkerProcesses>5</maxWorkerProcesses>
</appPool>
</app>
</webSites>
<serverDatabase>
<name>database_name</name>
<connectionString>REPLACED BY OCTOPUS</connectionString>
<providerName>System.Data.SqlClient</providerName>
</serverDatabase>
</environmentSettings>
You can see in the corresponding Get-EnvironmentSettings.ps1 how we load up the config and then update it with any Octopus variables. This is the trickiest part, because we use dot-notation to update the paths (case sensitive); a simplified sketch of that update follows the variable table below.
Our Octopus variables really only contain information that is secret, as everything else lives in [environment].xml:
| Name                          | Value       | Scope      |
|-------------------------------|-------------|------------|
| webSites.app.appPool.password | supersecret | Production |
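As a hedged illustration of that dot-notation update (a simplified sketch, not the actual Get-EnvironmentSettings.ps1 from the build modules): each dotted Octopus variable name is turned into an XPath and the matching node in the loaded environment XML is overwritten.

# Sketch only: apply Octopus variables such as "webSites.app.appPool.password"
# to the environment settings document loaded from environments\$environment.xml.
[xml]$settings = Get-Content "$currentDir\environments\$environment.xml"

foreach ($name in $OctopusParameters.Keys) {
    if ($name -notlike "*.*") { continue }                         # only dot-notation names
    $xpath = "/environmentSettings/" + ($name -replace "\.", "/")  # case sensitive, per the note above
    $node  = $settings.SelectSingleNode($xpath)
    if ($node -ne $null) {
        $node.InnerText = $OctopusParameters[$name]                # secret value wins over the xml default
    }
}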
So now a typical deployment script simply imports the modules, grabs the environment settings, updates the config, and installs the web app.
# Top of the script, get Octopus environment and version
param(
[string] $version = $OctopusPackageVersion,
[string] $environment = $OctopusEnvironmentName
)
# Make sure a failed deployment actually fails
$ErrorActionPreference = "Stop"
# Import the modules
$currentDir = Split-Path $script:MyInvocation.MyCommand.Path
$moduleDir = "$currentDir\modules"
Import-Module BuildDeployModules
# Grab the environment settings
$environmentSettings = Get-EnvironmentSettings $environment "//environmentSettings"
$databaseSettings = $environmentSettings.serverDatabase
$websiteSettings = $environmentSettings.webSites.app
# update the config
Update-XmlConfigValues $currentDir\website\Web.config "//appSettings/add[@key='databaseName']" $($databaseSettings.name) "value"
Update-XmlConfigValues $currentDir\website\Web.config "//connectionStrings/add[@name='databaseConnection']" $($databaseSettings.connectionString) "connectionString"
Update-XmlConfigValues $currentDir\website\Web.config "//connectionStrings/add[@name='databaseConnection']" $($databaseSettings.providerName) "providerName"
# Install the web application
Install-WebApplication $environment $websiteSettings $version "anonymousAuthentication"
In doing all of this, the web application is installed into IIS with a specific application pool and the appropriate config transforms, without relying on any unknowns.
Our nupkg structure looks something like this:
appname.1.2.3.4.nupkg
  environments
    dev.xml
    staging.xml
    qual.xml
    production.xml
  modules
    [all of our build modules]
  website
    [all of our website files]
This is super repeatable, easy to maintain, and makes the config easy to edit. Hope it helps.
I am trying to specify a rather complicated labeling rule in a TeamCity VCS root. I am not sure whether what I am trying to do is possible.
This is the directory structure I have inherited:
mysvn/abc/repos
|
|- TestDomain
|- TestSystem
   |
   |- MyFrameWork
   |- MySoftware
      |
      |- MySoftwareDevices
      |- MySoftwareFiles
      |  |
      |  |- branches
      |  |- tags
      |  |- trunk1
      |- MySoftwareDriver
         |
         |- branches
         |- tags
         |- trunk2
I want to specify a rule such that the TeamCity checkout directory has a structure like this:
TeamCity checkout directory
|
|- FolderA
|- FolderB
where FolderA has the contents of trunk1 and FolderB has the contents of trunk2.
Can this be done?
An SVN URL like:
mysvn/abc/repos/TestSystem/MySoftware/MySoftwareFiles/trunk1
does give me trunk1. But I need the contents of trunk1 and trunk2 under two different folders in the same build checkout directory.
Labeling I have been using: trunk=>tags
OK, I got this working. It was kind of trial and error. Hoping someone comes up with a proper explanation.
The labeling rules still have:
trunk=>tags
The SVN URL: mysvn/abc/repos/TestSystem
It is the checkout rules that I added:
-:.
+:MySoftware/MySoftwareFiles/trunk1=>MyDir/FolderA
+:MySoftware/MySoftwareDriver/trunk2=>MyDir/FolderB