Can Nexus or Artifactory store simple tar.gz artifacts?

I have cloud servers located in separate data centers across the world. Each data center is separate from the others.
I'm looking for a way to deploy artifacts to individual clusters of servers (which may be running different versions of the software, i.e. dev, test, and production clusters) in each of these regions easily and consistently. It seems to me that an artifact server is what I need, because I could execute an install script on the cloud server that pulls down the correct software artifact.
Now, I work on the operations side. I don't care about doing builds or managing software build dependencies. I simply want an artifact server where I can store all the different versions of my packages for access at a later time. The kicker is that I have several different types of artifacts to store:
Shell scripts
Python scripts
Puppet manifests
Debian packages (often delivered as a tar.gz containing multiple .deb files)
Can Nexus or Artifactory manage all of these types of packages, or should I be looking in a different direction? I'm not opposed to adding makefiles to my shell script projects that simply generate tar.gz files. I just don't want to go down the path of setting up an artifact repository when, ultimately, a little scripting, wget, and an Apache server would work just fine.

Both Artifactory and Nexus can handle any type of file, as they both are "Binary Repository Managers".
That said, while Nexus can technically store any file, it lacks support for binaries that do not adhere to the Maven repository layout. For example, such files will not be indexed and cannot be retrieved in searches. Also, if non-Maven artifacts encode module information in their path, then currently Artifactory is the only repository manager that can make use of it and allow version-based operations on artifacts (e.g., a download-latest-version query).
Although both of these tools have started out by solving a problem in the Maven world, the need for smart binary management has been recognized in many other fields, operations included.
Binaries do need a specialized manager, and although network shares/SCM/file servers seem like a viable option in the beginning, they just don't scale.
Also see my answer to a similar question for some of the benefits of a manager over the other ad-hoc solutions.

Yes, you can upload non-jar files. For example:
mvn deploy:deploy-file \
  -DgroupId=org.group.id \
  -DartifactId=artifact-id \
  -Dversion=0.0.0.1-SNAPSHOT \
  -Dpackaging=tar.gz \
  -DrepositoryId=repository-id \
  -Durl=http://url \
  -Dfile=localfile-0.0.0.1-SNAPSHOT.tar.gz
Newer versions of Nexus will handle certain files like tar, swf, and others by validating that they are properly formed. This may cause unexpected or unwanted behavior, though.
Is this the best way to go? Only you can say, based on your use cases. Factors like how often artifacts change and network latency can make or break a strategy.
refs:
https://stackoverflow.com/a/33311645/32453
http://betterlogic.com/roger/2012/04/mavennexus-upload-tgztar-gz-file/

If you want to do it with curl, try this approach: https://support.sonatype.com/entries/22189106-How-can-I-programatically-upload-an-artifact-into-Nexus-
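That approach boils down to a multipart form POST against the Nexus 2.x REST API. A rough sketch, assuming a Nexus 2.x hosted Maven repository named "releases"; the host, credentials, and coordinates below are placeholders:
curl -v -u admin:admin123 \
  -F "r=releases" \
  -F "hasPom=false" \
  -F "e=tar.gz" \
  -F "g=org.group.id" \
  -F "a=artifact-id" \
  -F "v=1.0.0" \
  -F "p=tar.gz" \
  -F "file=@localfile-1.0.0.tar.gz" \
  http://nexus.example.com/nexus/service/local/artifact/maven/content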

You can (see the other answers). You can then consume the uploaded artifacts from a build, for instance with the maven-dependency-plugin:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-dependency-plugin</artifactId>
  <executions>
    <execution>
      <id>copy</id>
      <phase>package</phase>
      <goals>
        <goal>copy</goal>
      </goals>
      <configuration>
        <artifactItems>
          <artifactItem>
            <groupId>org.apache</groupId>
            <artifactId>activemq-distro</artifactId>
            <version>5.7.0</version>
            <type>gz</type>
            <overWrite>true</overWrite>
            <outputDirectory>${project.build.directory}</outputDirectory>
          </artifactItem>
        </artifactItems>
        <!-- other configurations here -->
      </configuration>
    </execution>
  </executions>
</plugin>
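With the plugin bound to the package phase as above, a plain build then pulls the archive out of the repository into the build directory:
mvn package
# dependency:copy places the fetched archive in target/ (${project.build.directory})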

Solution for Nexus 3.
Start by creating a raw (hosted) repository in the Nexus administration UI.
Then you can use it in maven or via curl for example.
Example via curl:
curl -v --user "$NEXUS_REGISTRY_USER:$NEXUS_REGISTRY_PASSWORD" --upload-file ./my_artifact.tar.gz "$NEXUS_GENERAL_REPOSITORY_URL/general-raw/my-project/my_artifact.tar.gz"
More information is available in the Sonatype documentation on raw repositories.
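Fetching the artifact back is a plain HTTP GET against the same path (same placeholder variables as in the upload):
curl -u "$NEXUS_REGISTRY_USER:$NEXUS_REGISTRY_PASSWORD" -O "$NEXUS_GENERAL_REPOSITORY_URL/general-raw/my-project/my_artifact.tar.gz"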

Related

How can I speed up page load times while developing a Confluence macro with the Atlassian SDK? batch.js and batch.css taking up to 50 seconds

I've gone through the Confluence "Hello World" tutorial (https://developer.atlassian.com/server/framework/atlassian-sdk/create-a-confluence-hello-world-macro) and have a working macro.
But every page refresh takes up to 50 seconds, due to batch.css and batch.js probably being regenerated.
I've already tried a couple of suggestions given in different Atlassian forum threads.
Currently pom.xml looks like the following:
<build>
  <plugins>
    <plugin>
      <groupId>com.atlassian.maven.plugins</groupId>
      ...
      <extensions>true</extensions>
      <configuration>
        ...
        <enableQuickReload>true</enableQuickReload>
        <!-- make AMPS faster -->
        <enableDevToolbox>false</enableDevToolbox>
        <enablePde>false</enablePde>
        ...

<properties>
  <confluence.version>6.14.0</confluence.version>
  <confluence.data.version>6.14.0</confluence.data.version>
  <atlassian.dev.mode>false</atlassian.dev.mode>
  <amps.version>8.0.2</amps.version>
  ...
I've tried disabling batching with a quickreload.properties file in the plugin home directory with the following content
# non-filepath directive to quickreload to turn off batching (note the qr: prefix)
qr:webresourcebatching=false
But that only moves the issue: instead I get hundreds of individual files that together still take 40-50 seconds.
And I tried using ATLAS_OPTS="-Datlassian.dev.mode=false" as an environment setting.
[Screenshot of the Firefox developer tools network tab]

How can I deploy an upgrade of my product which contains a file with a version lower than the already deployed file?

I have an MSI component which deploys a file MyFile.dll. I have a test machine on which my product already deployed MyFile.dll with version 09.99.99.99.
Now I'm writing a major upgrade which will deploy a new version of MyFile.dll with version 05.23.76.123. After execution on the test machine, MyFile.dll is removed... I need to run a change or repair to deploy it correctly.
How can I force the deployment of MyFile.dll regardless of its embedded version number?
PS: This is happening on our test machines only. The product we delivered to our users has files with version numbers consistent with release history.
There are several ways in Windows Installer to do this, but they all have their complications. IMO I would just rebuild the same source code as the old DLL but with a newer, higher version and keep it simple.
This is perfectly possible. As said here, you may specify the REINSTALLMODE property and set it to "amus" or "dmus", depending on whether you want to always overwrite files or only overwrite files with a different version:
<Wix ...>
  <Product ...>
    <Property Id="REINSTALLMODE" Value="amus" />
Note that you'll get this warning when compiling your installer though:
warning LGHT1076: ICE40: REINSTALLMODE is defined in the Property table. This may cause difficulties.
Downgrading a file isn't really straightforward and has issues. As pointed out earlier, you can change the component GUIDs and get this to work. However, it really depends on where your RemoveExistingProducts action is sequenced. If it's sequenced at a point where the older product is removed before the newer product is installed, then it might work.
There is not really a straightforward and documented way. All the available options are just hacks.
Is this just for your test environment?
If yes, then you could use REINSTALLMODE="amus" in the Property table and achieve what you are looking for.
However, this is just for your testing and is not something you should suggest to your end users.
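For a quick test on a machine where the product is already installed, the same properties can also be passed on the msiexec command line instead of being authored into the MSI (the file names here are placeholders):
msiexec /i MyProduct.msi REINSTALLMODE=amus REINSTALL=ALL /l*v install.log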
Regards,
Kiran Hegde

Maven: How to activate inactive proxies via command line?

I have a proxy server configured for Maven via the per-user settings.xml file. The documentation snippet in the default settings.xml template suggests that it is possible to influence which of the configured proxies is used via a command-line switch:
<!-- proxies
| This is a list of proxies which can be used on this machine to connect to the network.
| Unless otherwise specified (by system property or command-line switch), the first proxy
| specification in this list marked as active will be used.
|-->
However, I have found no documentation on how this is supposed to work. The Maven documentation has very much the same template, but no mention whatsoever of a command-line switch or anything else.
So, suppose I have a proxy configured, but marked as <active>false</active>, like in this example:
<proxies>
<proxy>
<id>firstProxy</id>
<active>false</active>
<protocol>http</protocol>
<host>proxy.example.invalid</host>
<port>3128</port>
</proxy>
</proxies>
As per the comment, there should be some way to "activate" it, for instance by giving its id or something like that. I tried the "obvious" mvn -Dproxies.proxy.firstProxy.active=true java:compile, without success.
I'm very new to Maven and cannot shake the feeling that I am barking up the wrong tree in some way. Is what I am trying to do even possible at all—if so, can anyone point me to a description on how to do it—or am I wasting my time?
As @khmarbaise suggested:
use multiple settings.xml files, e.g. put them in the project root, then:
mvn package -s settings.A.xml
mvn package -s settings.B.xml

Using NuGet for Internal & External Dependencies in TFS

I'm currently looking at NuGet to solve my dependency problems in TFS, and what I want to do is host my own NuGet server to take care of internal dependencies. I also want to use NuGet to handle my 3rd party dependencies as well. I'm trying to set up automated builds for our company and this is one roadblock I'm trying to overcome with NuGet.
So my question is how do I handle this scenario in which I have to retrieve my dependencies from different servers?
Is there a better way to handle internal dependencies? How is everyone else doing this?
Also, just as a note, I intend on using NuGet without committing packages to TFS. I planned on using the method outlined in this article:
http://blog.davidebbo.com/2011/08/easy-way-to-set-up-nuget-to-restore.html
Glad you're looking into the no commit scenario for NuGet packages on TFS. You can take a look at my blog post on this topic where I explain the concept.
EDIT (2012/06/13): NuGetPowerTools is replaced by NuGet's built-in package restore functionality. However, same concept of changing the PackageSources element in nuget.targets still applies.
You definitely should take a look at David Fowler's NuGetPowerTools.
After installing this package, you can run Enable-PackageRestore (a newly installed command in the Package Manager Console), which will add MSBuild targets to your project files. These MSBuild targets will trigger nuget.exe in a pre-build step and fetch any packages required by your project.
There's no need to check NuGet packages into source control; all you need is the packages.config and these MSBuild targets.
To configure multiple, different package sources, you need to set a few properties used by these MSBuild tasks. One of them is PackageSources. You can set it by editing the NuGet.targets file, which you will find in the .nuget folder once you have enabled package restore.
Regarding those package sources, you could set up different internal NuGet galleries, or simply set up different network shares to be used. This is a matter of requirements and preference, so you can choose. All you need to do is tell your MSBuild targets to use these package sources. The order in which you define them will also be the order in which packages are looked up.
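Under the hood, the restore target ends up invoking nuget.exe per project, along these lines (the sources and paths below are illustrative, not the exact command in NuGet.targets):
nuget.exe install packages.config -source "https://nuget.example.internal/;\\fileshare\nuget-packages" -o packages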
Good luck!
Xavier
A little update on the accepted answer and question:
When using TFS as a build machine without Visual Studio installed on it, you can do the following so the build machine automatically uses your custom package sources (more than one in the same solution) without any further configuration of package sources in your solution.
Create a machine default config by placing a NuGet.Config in the root (C:\NuGet.Config) using the sample from: http://docs.nuget.org/docs/reference/nuget-config-file
Comment out the line with: <add key="repositorypath" value="$\External\Packages" />
Otherwise your packages get expanded into C:\$\External\packages\. When that line is commented out, the config gets chained and the right directory will be used.
Configure the package source(s) you need.
For more info about other options (e.g. user-specific settings) see: http://docs.nuget.org/docs/reference/nuget-config-file (bottom of the page).

Play! framework: compile on server only instead of on client

Is it possible to compile my Play! framework application only on the server side?
Since I mount a Samba share from the server hosting Play! on my client, the paths (modules, play, libs) differ between client and server. So eclipsify gives me the server paths on my client instead of the client paths, and because of this the client gives me a build error.
A solution would be to either:
Change the eclipsify paths per client configuration.
Only compile my app on the server (preferred, since there would be no differences in environment settings).
Can anyone tell me how one of these options would be possible?
Take a look at the play-maven plugin? Using Maven for dependency management means all developers will have the same pom/config file; when running a Maven build, jars/libs will be downloaded from the repository server (you can use your own repository server too).
Why don't you install the Play framework on the client? This framework is for development tasks, so you should install it on your development machine (the client, I presume). The Play framework is freely downloadable and easy to install on your client.
I've found a temporary "solution" that lets each client define its own path (it will probably be overwritten by play eclipsify? Can I change this?).
In Eclipse I've added a variable called PLAY_HOME under Window > Preferences > Java > Build path > Classpath Variables pointing to "D:\play-1.2.2" in this case.
In the .classpath I've replaced all absolute paths:
<classpathentry kind="lib" path="/usr/local/bin/play-1.2.2/framework/lib/...jar" />
to:
<classpathentry kind="var" path="PLAY_HOME/framework/lib/...jar"/>
Still no compilation on the server/continuous integration etc., but it's a working solution for now, though it could be improved (the client-server dependency differences still exist).
It would be nice to check whether the version of Play matches.
It would be nice to make the PLAY_HOME variable optional by defaulting it to '..' (the parent directory).
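If play eclipsify later regenerates .classpath with absolute paths, the substitution can be re-applied with a one-liner (a rough sketch based on the example paths above):
# rewrite absolute Play lib entries to use the PLAY_HOME classpath variable
sed -i 's|kind="lib" path="/usr/local/bin/play-1.2.2|kind="var" path="PLAY_HOME|g' .classpath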
Perhaps an Ant script is what you need?
If I understand your question correctly, you want to develop with multiple developers on a single instance of an application hosted on some server?
It's maybe not the answer you're looking for, but my advice: don't do it this way.
Developing directly on a server, especially with multiple developers, is one of the great anti-patterns in development. Typically, only beginners and rather non-professional developers (no offense meant) do their development this way.
Restarting the server, debugging code, working in the same files... it only ends in tears when doing this 'shared' development.
Make sure you can run the application completely isolated on each workstation. Use version control to check in changes. If two developers have been working on the same code, you at least have a chance to rectify the situation (and a rather good chance if you use e.g. Mercurial or Git). If you still want a global server to e.g. demo changes to non-developers, just periodically check out a snapshot from version control and deploy that to this server.