How do you call another Ant script with parameters (-logger org.apache.tools.ant.listener.MailLogger) from a main Ant script?

I have a main Ant script that is used to initiate multiple projects' Ant scripts in a certain sequence.
For each sub-project, I would like to send out an email notifying me whether the build was successful or not.
I understand that I can use the flag -logger with org.apache.tools.ant.listener.MailLogger to send out an email after the build finishes.
However, with multiple scripts that should each send an email, I'm not sure how to pass that -logger org.apache.tools.ant.listener.MailLogger flag into the ant call.
Specifically, I would like to pass the logger flag into this ant call:
<ant antfile="build.xml" dir="subproject/build" target="build" />
I tried using param and args, but didn't succeed.

Good question. Personally, I could not make it work with the <ant> task; command-line flags such as -logger don't seem to be supported there.
However, this hack works.
<exec executable="ant.bat">
<arg value="-logger"/>
<arg value="org.apache.tools.ant.listener.MailLogger"/>
<arg value="-f"/>
<arg value="other_build.xml"/>
</exec>
Two immediate issues with this approach (see the sketch below for a possible way around them):
It is not platform independent.
The build reports success even when the sub-build fails (even with exec's failonerror="true").

Related

Ant Deploy Windows build manifests SVN

Hi everyone, I am using Eclipse, Subclipse, and Ant. I would like to generate a build manifest with the files that have been changed, added, updated, or deleted in the repo (with the individual version numbers on my current system).
<propertyfile file="${dist.dir}\deploymentManifest.txt"
comment="This file is automatically generated - DO NOT EDIT">
<entry key="buildtime" value="${builtat}"/>
<entry key="build" value="${svnversion}"/>
<entry key="version" value="${version}"/>
<entry key="systemLocation" value="${directory/filename.ext}"/>
</propertyfile>
How do I pull that information from the files in Eclipse? Or how do I use Ant to retrieve this info?
Thanks,
Frank
Well, ${builtat} could be taken from the <tstamp> task in Ant. The others could be parsed by running svn log --xml and then feeding the resulting XML to an <xmlproperty> task. Right off the top of my head (i.e. no error checking):
<!-- Gets the Time Stamp -->
<tstamp>
<format property="buildtat" pattern="MM/dd/yyyy HH:MM"/>
</tstamp>
<!-- Generates the revision information you need-->
<exec
executable="svn"
output="${svn.log.file}">
<arg line="log --xml -rHEAD/>
</exec>
<!-- Reads that information into a Property -->
<xmlproperty file="${svn.log.file}"/>
<echo message="Subversion Rev: ${log.logentry{revision}}"/>
However, I'd recommend you look at a continuous build system like Jenkins. Whenever you make a change in your Subversion repository, Jenkins picks up the change and automatically does a new build. Not only does this allow you to verify that your changes don't break your build, but Jenkins can do other things too like run JUnit tests. Jenkins then stores your build and the results of your tests and the whole build log in an easy to get to HTML page.
Where Jenkins really helps here is that it automatically exposes things such as the Subversion revision as part of the build process. You can fetch the Subversion revision, the Jenkins build number, the name of the Jenkins project, and many other things as environment variables. Then, you could do this:
<property env="env."/>
<propertyfile file="${dist.dir}\deploymentManifest.txt"
comment="This file is automatically generated - DO NOT EDIT">
<entry key="buildtime" value="${env.BUILD_ID}"/>
<entry key="build" value="${env.SVN_REVISION}"/>
<entry key="version" value="${BUILD_NUMBER}"/>
<entry key="systemLocation" value="${directory/filename.ext}"/>
</propertyfile>
Take a look at Jenkins. It's fairly easy to understand and use.
It should take you about 5 minutes to download and maybe 10 minutes on a Linux system to get up and running. Windows is more complex and might take as long as 15 to 20 minutes to get up and running. You can run it on your desktop system for now, and play around with it.
It should take you maybe another half hour to figure out how to setup a project that can automatically do builds whenever someone does a commit.
Jenkins is web-based, but comes with its own lightweight web application engine. All you need is Java 1.6 to run it. (And if you're using Eclipse, you should already have that.)

How do I use MailLogger in an Ant script?

I am using an Ant script and a Perl script for deploying patches and builds to testing machines.
For this I am invoking different targets like Backup, Unzip, Deploy, Log, and Mail to perform the deployment.
But I want to send a mail to the developer saying where our script has failed. For example, if it failed at the Deploy target, I want to mail the developer that deployment failed at the Deploy target, even though the Mail target is never reached because the build has already failed at Deploy.
How can I use MailLogger to send this mail?
My script is:
<?xml version="1.0"?>
<project name"xyz" default="D">
<target name"Backup">
</target>
<target name"Unzip">
</target>
<target name"Deploy">
</target>
<target name"Log">
</target>
<target name"Mail">
</target>
</project>
Using the MailLogger won't tell you the name of the target where your build has failed.
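(For reference, MailLogger itself is enabled on the command line rather than inside the build file, and is configured through MailLogger.* properties; the host and addresses below are placeholders:)
ant -logger org.apache.tools.ant.listener.MailLogger -DMailLogger.mailhost=smtp.example.com -DMailLogger.from=build@example.com -DMailLogger.failure.to=developer@example.com -DMailLogger.success.to=developer@example.com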
You may use some try/catch/finally construct available via ant addons like
Flaka
Antcontrib / Antelope
and then use the <mail> task in the catch block to send your mails, setting the subject, the mail body, and attachments (e.g. the logs captured with the <record> task) to your liking, as sketched below.
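For example, with Antcontrib a minimal sketch could look like this (assuming the ant-contrib jar is available to Ant; the mail host and addresses are placeholders):
<taskdef resource="net/sf/antcontrib/antlib.xml"/>
<target name="D">
  <trycatch property="failure.message">
    <try>
      <!-- run the normal deployment sequence -->
      <antcall target="Backup"/>
      <antcall target="Unzip"/>
      <antcall target="Deploy"/>
      <antcall target="Log"/>
    </try>
    <catch>
      <!-- mail the developer what went wrong, then fail the build -->
      <mail mailhost="smtp.example.com" subject="Deployment failed"
            from="build@example.com" tolist="developer@example.com">
        <message>Deployment failed: ${failure.message}</message>
      </mail>
      <fail message="${failure.message}"/>
    </catch>
  </trycatch>
</target>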
Otherwise, if you need more feedback for your clients, you should consider using a real continuous integration tool like Jenkins/Hudson or CruiseControl. Besides mail notification, they provide a dashboard with all the details.

TFS2010 Build Definition to Deploy to multiple servers?

I've been looking into TFS2010's new build and deployment features with MSDeploy. So far everything is going well (although it's been hard to find information about specific scenarios).
Can I modify my Build Definition to specify 2 or more servers to deploy to? What I need to do is deploy to multiple servers (as I have two in my testing environment which uses a NLB).
What I have now is a Build definition which Builds, runs my tests, and then Deploys to ONE of my testing servers (which has the MsDeployAgentService running on it). It works fine, and each web project is deployed as configured in its project file. The MSBuild Arguments I use are:
* /p:DeployOnBuild=True
* /p:DeployTarget=MsDeployPublish
* /p:MSDeployServiceURL=http://oawww.testserver1.com.au/MsDeployAgentService
* /p:CreatePackageOnPublish=True
* /p:MsDeployPublishMethod=RemoteAgent
* /p:AllowUntrustedCertificate=True
* /p:UserName=myusername
* /p:Password=mypassword
NB: I don't use /p:DeployIISAppPath="xyz" as it doesn't deploy all my projects and overrides my project config.
Can I add another build argument to get it to call more than one MSDeployServiceURL? Like something like a second /p:MSDeployServiceURL argument that specifies another server?
Or do I have to look for another solution, such as editing the WF?
I saw an almost identical question posted here 2 months ago: TFS 2010 - Deploy to Multiple Servers After Build, so it doesn't look like I'm the only one trying to solve this.
I also posted on the IIS.NET forums where MSDeploy is discussed: http://forums.iis.net/t/1170741.aspx . It's had quite a lot of views, but again, no answers.
You don't have to build the project twice to deploy to two servers. The build process will build a set of deployment files. You can then use the InvokeProcess activity to deploy to multiple servers.
First create a variable named ProjectName. Then add an Assign activity to the "Compile the Project" sequence. This is located in the "Try to Compile the Project" sequence. Here are the properties of the Assign:
To: ProjectName
Value: System.IO.Path.GetFileNameWithoutExtension(localProject)
Here are the properties of our InvokeProcess activity that deploys to the test server:
Arguments: "/y /M:<server> /u:<domain>\<user> /p:<password>"
FileName: String.Format("{0}\{1}.deploy.cmd", BuildDetail.DropLocation, ProjectName)
You will need to change <server>, <domain>, <user>, and <password> to the values that reflect your environment.
If you need to manually deploy to a server you can run the command below from your build folder:
deploy.cmd /y /M:<server> /u:<domain>\<user> /p:<password>
I couldn't find the solution I was looking for, but here's what I came up with in the end.
I wanted to keep the solution simple and configurable within the TFS arguments while at the same time staying in line with the already provided MSBuildArguments method which has been promoted a lot. So I created a new Build Template, and added a new TFS WorkFlow Argument called MSBuildArguments2 in the Arguments tab of the WorkFlow.
I searched through the BuildTemplate WorkFlow for all occurrences of MSBuildArguments (there were two occurrences).
The two tasks that use MSBuildArguments are called Run MSBuild for Project. Directly below this task, I added a new "If" block with the condition:
Not String.IsNullOrEmpty(MSBuildArguments2)
I then copied the "Run MSBuild for Project" task and pasted it into the new If's "Then" block, updating its title accordingly. You'll also need to update the new task's CommandLineArguments property to use your new argument.
CommandLineArguments = String.Format("/p:SkipInvalidConfigurations=true {0}", MSBuildArguments2)
Save and Check In the new WorkFlow. Update your Build Definition to use this new WorkFlow, then in the build definition's Process tab you will find a new section called Misc with the new argument ready to be used. Because I'm simply using this new argument for deployment, I copied the exact same arguments I used for MSBuild Arguments and updated the MSDeployServiceURL to my second deployment server.
And that's that. I suppose a more elegant method would be to convert MSBuildArguments into an array of strings and then loop through them during the WorkFlow process. But this suits our 2 server requirements.
Hope this helps!
My solution to this is a new Target that runs after Package. Each project that needs to produce a package includes this targets file, and I chose to make the Include conditional on an externally-set "DoDeployment" property. Additionally each project defines the DeploymentServerGroup property so that the destination server(s) are properly filtered depending on what kind of project it is.
As you can see towards the bottom I'm simply executing the command file with the server list, pretty simple.
<!--
This targets file allows a project to deploy its package
It is used by all project types and is conditionally included from the project file.
-->
<UsingTask TaskName="Microsoft.TeamFoundation.Build.Tasks.BuildStep" AssemblyFile="$(TeamBuildRefPath)\Microsoft.TeamFoundation.Build.ProcessComponents.dll" />
<!-- Each Server needs the Group metadatum, either Webservers, Appservers, or Batch. -->
<Choose>
<When Condition="'$(Configuration)' == 'DEV'">
<ItemGroup>
<Servers Include="DevWebServer">
<Group>Webservers</Group>
</Servers>
<Servers Include="DevAppServer">
<Group>Appservers</Group>
</Servers>
</ItemGroup>
</When>
<When Condition="'$(Configuration)' == 'QA'">
<ItemGroup>
<Servers Include="QAWebServer1">
<Group>Webservers</Group>
</Servers>
<Servers Include="QAWebServer2">
<Group>Webservers</Group>
</Servers>
<Servers Include="QAAppServer1">
<Group>Appservers</Group>
</Servers>
<Servers Include="QAAppServer2">
<Group>Appservers</Group>
</Servers>
</ItemGroup>
</When>
</Choose>
<!-- DoDeploy can be set in the build definition -->
<Target Name="StartDeployment" AfterTargets="Package">
<PropertyGroup>
<!-- The _PublishedWebsites area -->
<PackageLocation>$(WebProjectOutputDir)_Package</PackageLocation>
<!-- Override for local testing -->
<PackageLocation Condition="$(WebProjectOutputDirInsideProject)">$(IntermediateOutputPath)Package\</PackageLocation>
</PropertyGroup>
<Message Text="Tier servers are #(Servers)" />
<!-- A filtered list of the servers. DeploymentServerGroup is defined in each project that does deployment -->
<ItemGroup>
<DestinationServers Include="#(Servers)" Condition="'%(Servers.Group)' == '$(DeploymentServerGroup)'" />
</ItemGroup>
<Message Text="Dest servers are #(DestinationServers)" />
</Target>
<!-- Only perform the deployment if any servers fit the filters -->
<Target Name="PerformDeployment" AfterTargets="StartDeployment" Condition="'#(DestinationServers)' != ''">
<Message Text="Deploying $(AssemblyName) to #(DestinationServers)" />
<!-- Fancy build steps so that they better appear in the build explorer -->
<BuildStep
TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
BuildUri="$(BuildUri)"
Message="Deploying $(AssemblyName) to #(DestinationServers)...">
<Output TaskParameter="Id" PropertyName="StepId" />
</BuildStep>
<!-- The deployment command will be run for each item in the DestinationServers collection. -->
<Exec Command="$(AssemblyName).deploy.cmd /Y /M:%(DestinationServers.Identity)" WorkingDirectory="$(PackageLocation)" />
<BuildStep
TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
BuildUri="$(BuildUri)"
Id="$(StepId)"
Status="Succeeded"
Message="Deployed $(AssemblyName) to #(DestinationServers)"/>
<OnError ExecuteTargets="MarkDeployStepAsFailed" />
</Target>
<Target Name="MarkDeployStepAsFailed">
<BuildStep
TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
BuildUri="$(BuildUri)"
Id="$(StepId)"
Status="Failed" />
</Target>
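For reference, the conditional include mentioned above might look roughly like this inside each deploying project file (a sketch; the targets file name and path, and the DoDeployment/DeploymentServerGroup values, are assumptions based on the description):
<PropertyGroup>
  <!-- which group of servers this project should deploy to -->
  <DeploymentServerGroup>Webservers</DeploymentServerGroup>
</PropertyGroup>
<!-- only pull the deployment targets in when the build definition passes /p:DoDeployment=true -->
<Import Project="$(SolutionDir)Build\Deployment.targets" Condition="'$(DoDeployment)' == 'true'" />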
I am the author of the other similar post. I have yet to find a solution. I believe it is going to involve modifying the workflow to add a post-processing MSBuild sync task. That seems to be the most elegant, but I was still hoping to find something a bit less intrusive.
I'm not sure if that could help you with TFS 2010, but I have a blog post for TFS 2012: Multiple web projects deployment from TFS 2012 to NLB enabled environment.

Enabling Console output for exec ANT task

Inside Eclipse, I'm launching an HTML page with a SWF embedded from Ant, using the following macrodef:
<macrodef name="runhtml">
<attribute name="url" />
<attribute name="browser" default="${app.browser.firefox}" />
<sequential>
<exec
executable="open"
vmlauncher="true"
spawn="false"
failonerror="true">
<arg line="-a '#{browser}'" />
<arg line="#{url}" />
</exec>
</sequential>
</macrodef>
Despite the fact that the swf contains traces, I am not getting any output from them in the console. What could be causing this?
In order to get traces from Flash you need to run the Flash Debugger (FDB). Luckily it comes with the Flex SDK. (http://www.adobe.com/devnet/flex/flex-sdk-download.html)
This is a sample task that I am using in Ant to launch the Flash Debugger, which in turn will launch your browser because the target is an HTML file. If the target was a SWF file then it would simply run in a standalone FDB window.
<target name="launch-browser">
<echo file="${basedir}/build/.fdbinit">run file://${outputdir}/swf/index.html
continue</echo>
<exec executable="${sdk.flex}bin/fdb" spawn="false" dir="build">
<arg line="-unit"/>
</exec>
</target>
This task will first write a file called .fdbinit which contains the commands that fdb will run when launched. Then it starts fdb with -unit to make sure it stays properly attached to the ant builder (I'm actually not 100% on this but it is required). This will give you the browser, and the traces (also the actual debugger control) in your terminal window.
--
Alternatively, using your original macrodef, if you have the Flash Debug Player installed on your machine, you can configure your Flash Player to write the traces to a file by editing your mm.cfg file and setting the TraceOutputFileEnable and TraceOutputFileName options.
This file is found in /Library/Application Support/Macromedia on OSX.
Relevant and additional docs for mm.cfg:
http://help.adobe.com/en_US/flex/using/WS2db454920e96a9e51e63e3d11c0bf69084-7fc9.html
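For illustration, mm.cfg would then contain something like the following (the path is a placeholder, and some debug player versions ignore TraceOutputFileName and always write to their default flashlog.txt location):
ErrorReportingEnable=1
TraceOutputFileEnable=1
TraceOutputFileName=/Users/yourname/flashlog.txt
You can then tail that file in a terminal while the browser runs the SWF.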
I've got exactly the same problem. Error messages are echoed to the console but info messages are not. The only solution I have found so far is to add your own echoes to the macrodef.
The only way, it seems, to also automate it is to use the .fdbinit approach described by Benoit, but putting each command on a different line:
<echo file="${BUILD.dir}/.fdbinit">run file://${outputdir}/swf/index.html
continue</echo>

how to do builds with nant

In my org, we are planning to go for NAnt for .NET web applications. Source control is TFS, with Visual Studio 2008. I would like to know how to do builds with NAnt. How do I create an MSI and deploy the application using NAnt? Is a separate build machine required to do builds with NAnt? Somebody please help me out; I need a step-wise process. Thanks in advance.
Thanks
Shanthi
For a step-by-step guide to using NAnt, I suggest referring to the NAnt project documentation for the fundamental concepts. Once you are familiar with its basic usage, I suggest investigating the NAntContrib project to obtain more build tasks.
One part of your question that I would like to address directly here is whether a separate machine is required to use NAnt. NAnt does not strictly require a separate machine; however, one might be beneficial if your build process is automated or particularly intensive.
[Update]
In response to comment from OP:
NAnt views the build process as a series of individual tasks to be performed as part of a target. The normal process for building an application would be to invoke a compiler on the source files in order to produce a binary; NAnt has a number of tasks that invoke language compilers.
In this example I will invoke the C# language compiler (csc.exe) using the <csc> task in an NAnt build file for a Hello World application that consists of a single source file named HelloWorld.cs.
<?xml version="1.0"?>
<project name="Hello World" default="build" basedir=".">
<property name="debug" value="true" overwrite="false" />
<target name="build" description="compiles the source code">
<csc target="exe" output="HelloWorld.exe" debug="${debug}">
<sources>
<includes name="HelloWorld.cs" />
</sources>
</csc>
</target>
</project>
Let's examine this XML:
<project name="Hello World" default="build" basedir=".">
Things to Note:
The value of the default attribute is "build". This means that the target named "build" will be invoked if no other target is specified.
<target name="build" description="compiles the source code">
This is the build target; as the description states, it will compile the source code. To do this the csc task is used. The csc task has a number of options, including:
target: This specifies the type of binary the target will produce. In this case an executable will be produced
output: specifies the name of the executable file that will be created
debug: The value of this attribute uses the debug property (defined above as true, unless overridden on the command line), which determines whether the compiler produces an executable that contains debugging information
sources & include:
specifies the source files that the compiler will parse in order to produce the executable
As you can see, the actions necessary to build the source code are defined in a target. A build file can define many targets, each calling many tasks. To produce an MSI file you would invoke a task that produces one; unfortunately, as I don't actually use NAnt regularly, you will have to do some research to find it, although I have a feeling the NAntContrib project includes such a task, given how common it is to produce an MSI.
I hope this explanation has clarified things for you.
The information in this update has been distilled from this document in the NAnt documentation
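To actually run a build file like the one above, save it (for example as HelloWorld.build, a name I've made up here) and invoke NAnt from the command line, naming the target to run:
nant -buildfile:HelloWorld.build build
Here "build" names the target to run; omitting it makes NAnt run the default target declared on the <project> element.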
A separate build machine is not necessarily required, but it's definitely recommended.
You'll want to look into using the following tools:
CruiseControl .NET
TFS Plugin for CC.NET