Indexed repositories within Artifactory - Eclipse

Apologies if this is obvious to everyone else...
I've deployed the Artifactory war file within tomcat6 and started the server: all looks great.
Now, I want to navigate around the preconfigured repositories, for instance repo1-cache. However, it appears to be empty; there are no tree elements to expand. This seems to be the story for all the listed repositories. Consequently, I can't run any searches for particular artifacts.
Am I missing a stage here? Do I need to force it to index itself? What should I be expecting once I've deployed the war file and when I first log in?
I guess my expectation was that, once the war file was deployed, Artifactory would automatically index the remote repositories. I'd then configure Eclipse to point at the Artifactory install so that it can index the repositories within the IDE. Then, when I declare a new dependency, Artifactory would download and cache it locally, allowing for faster resolution next time. Is this a valid expectation?
Any feedback will be most appreciated, particularly any pointers to user documentation that covers this that I've overlooked.

Your repositories all appear empty because nothing has been deployed to them or requested through them yet.
Once you deploy artifacts to the local repositories or request artifacts from remote repositories, you'll see them in the browser.
If you'd like to browse through artifacts not yet cached in remote repositories, you can use Artifactory's simple browser (see the Remote Browsing section).
Maven Indexes can be created and retrieved manually or as a recurring task; the Indexer can be configured in Artifactory's admin UI in Admin->Services->Indexer (also see the Indexer's wiki page).
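For example, a single request through one of the preconfigured remote repositories is enough to populate its cache. A minimal sketch, assuming a default install running at localhost:8080 and the default repo1 remote repository (the host, port, repository key, and artifact below are assumptions to adjust to your installation):

# Request an artifact through Artifactory's remote repository proxy path
curl -O http://localhost:8080/artifactory/repo1/junit/junit/4.8.2/junit-4.8.2.jar

Once the request succeeds, the cached copy should appear under repo1-cache in the tree browser and become searchable.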

Related

Automate the process of uploading a new build to a website, i.e. npm run build -> cPanel upload

I am managing a mostly static site through GoDaddy.
The site is a React single-page application that is still under development and occasionally needs content updates. The project folder is hosted as a public git repository.
My goal is to be able to automate the process of updating the site. Currently I need to:
npm run build
navigate to the build folder in Windows File Explorer
navigate to the public_html folder in cPanel, in my web browser
delete the current build files
upload the contents of the build folder into cPanel, folder by folder (cPanel will not allow me to upload subfolders)
I have looked through countless forum posts, blogs, etc. to find a way to automate this, but I always end up doing it manually.
You need to look into continuous integration/continuous deployment (CI/CD) and possibly a different hosting setup. Unfortunately, the type of platform you are using (with cPanel) is limiting and not really oriented toward your use case.
However, cPanel does have an option to use Git version control to manage the files and folders in your account. Go into this option and choose "Clone repository", where you'll have to link a repo and specify where it should be installed. Note: it is possible that your hosting provider has disabled this feature.
I suspect that this cPanel feature does not automatically pull in changes when you update the repo, so you would probably still need to manually clone the repo again when you make changes (which is still easier than copying files over). Also note that any data you store may be removed when cloning again.
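If your host exposes FTP access, you could also script the existing manual steps yourself. A rough sketch, assuming an FTP account, lftp installed locally, and public_html as the document root (the host, credentials, and paths are placeholders, and keeping credentials in a script carries the usual security caveats):

# Build the React app, then mirror the build folder to the cPanel
# document root over FTP, uploading subfolders and deleting stale files
npm run build
lftp -u USERNAME,PASSWORD -e "mirror -R --delete ./build /public_html; bye" ftp.example.com

This uploads nested folders in one go, which the cPanel file manager won't do, and removes leftovers from the previous build.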

What grunt files to upload to repo vs files to upload when deploying site to production

So, I have a webapp I am creating using the three musketeers: Yeoman, Grunt, and Bower.
My questions are:
What is best practice when it comes to uploading my webapp to a Git/Mercurial repo? Do I include the entire project? What about directories like 'node_modules' or 'test', etc.?
Also, when deploying to the live production site, is my 'dist' folder what I should be uploading?
My research has yielded no results (I could be searching for the wrong things). I'm a bit new to this process, so any feedback is greatly appreciated. Thanks!
You should always commit all of your Yeoman, Grunt, and Bower config files.
There are two schools of thought on committing the output they produce or dependencies they download:
One is that you should commit everything needed for another user to deploy the web app right after cloning the repository, without performing any additional steps. The reasoning is that dependencies may no longer exist, network connections might be down, etc.
The other is to keep the repository small and not commit node_modules, etc., since the user can download them (see the rebuild sketch below).
As far as the dist folder goes, yes, it's what you'll upload to your server, as it contains all of your minified files. Whether or not to commit it to the repository is a separate question. You might let the user build it every time, assuming they can get all the dependencies one way or another (depending on the choice above), or you might commit it so it can be tagged with a release version along with your source code.
There's some more discussion on this here: http://addyosmani.com/blog/checking-in-front-end-dependencies/
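For the second approach, anyone cloning the slim repository restores the dependencies and rebuilds locally. A rough sketch of that workflow, assuming the standard Yeoman toolchain (the exact Grunt task name depends on your Gruntfile):

# After cloning a repository that excludes node_modules and bower_components:
npm install      # restore Grunt plugins and other Node dependencies
bower install    # restore front-end dependencies
grunt build      # regenerate the dist folder

The matching .gitignore would then typically list node_modules, bower_components, and (optionally) dist.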

Can I download artifacts built by BuildHive?

I have started using the free Jenkins build service on BuildHive for one of my GitHub projects. This is also my first try doing anything with Maven. I have succeeded in building my project using this script on BuildHive:
cd base_dir
mvn package
The build log shows that the resulting JAR has been built. Now I would like to offer the JAR to my project's users as a download artifact because GitHub has discontinued the feature of manually uploading binaries in a separate download section.
Is there any way I can download an artifact, referencing it by a URL? If so, how do I construct the URL, knowing only the artifact's local path from the build log?
Alternatively, is there a way in which I can push the artifact to another place by adding a command to my build shell script after mvn package? I was thinking of something like a curl or ftpput command.
The best thing I was able to come up with as a quick workaround was to upload the artifacts in question to my FTP server via curl, as suggested in my original question. It works, but the downside is that the FTP credentials appear in the public build log. I have counterbalanced that with a shell script on my DSL router that checks for FTP storage abuse every few minutes.
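For reference, the upload amounts to a single curl call appended to the build script; the server, target path, artifact name, and credentials below are placeholders, and, as noted above, they end up in the public build log:

mvn package
# Push the packaged JAR to an FTP server (placeholder host and credentials)
curl -T target/myproject-1.0.jar --user FTPUSER:FTPPASSWORD ftp://ftp.example.com/downloads/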
As an alternative, I found that after creating a free CloudBees account for my little open source project, I got my own Jenkins build configuration as well as my own artifact repository to which I can deploy my build artifacts. This is much more elegant and does not involve posting any FTP credentials to a public server.
I am still open for BuildHive-only solutions if anyone has a smart idea. :-)

Performing a partial export from Nexus

I haven't worked much with Nexus before, so I'm still trying to work out a product lifecycle that works for us.
I want to be able to export certain sets of repository artifacts from Nexus into another Nexus instance. It looks like the only way to do this, as of now, is to pull the artifacts as a set of dependency builds and then deploy them to the new repository. This may be what we have to go with; I was just looking for a better approach.
It looks like mirroring or proxying won't give us the fine-grained control over the export that I need.
I see that I can just copy artifacts out of Nexus, but I'm not sure how to tell the new Nexus instance that it is supposed to manage those files.
What I want to do is be able to put a set of artifacts onto a DVD that can be run as a localized Nexus instance for the purpose of installing software at a customer site. It appears that any customer that will allow a connection back to us for software installs can be treated with the same install setup we use for QA. The reason to use a Nexus deploy instead of an installer is that we need to be able to roll back a "patch install", as each patch/install set would be maintained as a release version. Right now this is all done in custom code, since there doesn't seem to be an installer that handles rollbacks (with backups) after the install has completed.
This would be a non-standard use case for Nexus, and it strikes me that Nexus would be overkill for your requirements. Any web server can act as a Maven repository, once the files are available in the correct layout.
For example, why not burn the desired subset of repository files onto a DVD and include a copy of Jetty? Jetty can be launched from the DVD and serve up the local content over HTTP.
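A rough sketch of that setup, assuming you ship the standalone jetty-runner jar on the DVD next to a Maven-layout copy of the artifacts (the paths, port, and group directory are placeholders):

# Stage the desired subset of artifacts, preserving the Maven repository layout
cp -r ~/.m2/repository/com/mycompany dvd-staging/repository/com/mycompany

# At the customer site, serve the repository directory over HTTP from the DVD;
# jetty-runner can deploy a plain directory as a web context and serve its files
java -jar /media/dvd/jetty-runner.jar --port 8081 /media/dvd/repository

The customer-side Maven (or the installer driving it) would then point at http://localhost:8081/ as its repository.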

How to deploy TeamCity artifacts to an Amazon EC2 server

We decided to use Amazon AWS cloud services to host our main application and other tools.
Basically, we have an architecture like this:
TESTSERVER: The EC2 instance to which our main application is deployed. Testers have access to the application.
SVNSERVER: The EC2 instance hosting our Subversion repository.
CISERVER: The EC2 instance on which JetBrains TeamCity is installed and configured.
Right now, I need CISERVER to check out the code from SVNSERVER, build it, run the unit tests if the build succeeds, and, after all tests pass, deploy the artifacts of the successful build to TESTSERVER.
I have finished configuring CISERVER to pull the code, build, test, and produce artifacts, but I couldn't figure out how to deploy the artifacts to TESTSERVER.
Do you have any suggestion or procedure to accomplish this?
Thanks for help.
P.S.: I have read this question and am not satisfied.
Update: there is a deployer plugin for TeamCity which allows publishing artifacts in a number of ways.
Old answer:
Here is a workaround for the fact that TeamCity doesn't have built-in artifact publishing via FTP:
http://youtrack.jetbrains.net/issue/TW-1558#comment=27-1967
You can:
create a configuration which produces the build artifacts
create a configuration which publishes the artifacts via FTP
set an artifact dependency in TeamCity from configuration 2 to configuration 1
Use either manual or automatic triggering to run configuration 2 with the artifacts produced by configuration 1. This way, your artifacts will be downloaded from build 1 into configuration 2 and published to your FTP host.
Another way is to create an additional build step in TeamCity for configuration 1, which publishes your files via FTP.
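As a sketch of that extra step, a command-line runner invoking curl would do; the artifact path, host, and credentials below are placeholders, and you would normally keep the credentials in a TeamCity build parameter rather than hard-coding them:

# Command-line build step: push the packaged artifact to the target server via FTP
curl -T build/output/myapp.war --user FTPUSER:FTPPASSWORD ftp://testserver.example.com/deploy/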
Hope this helps,
KIR
What we do for deployment is that the QA people log on to the system and run a script that deploys by pulling from the TeamCity repository whenever they want. They can see in TeamCity (and get an e-mail) when a new build has happened, but regardless they just deploy when they want. In terms of how to construct such a script, the TeamCity part of it is retrieving the artifact. That is why my answer references getting the artifacts by URL; that is something any reasonable script can do using wget (which has a Windows port as well) or similar tools.
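A sketch of such a pull script, assuming guest access to artifacts is enabled on the TeamCity server; the server address, build configuration ID, artifact name, and install path are placeholders to adapt:

# Fetch the latest successful build's artifact from TeamCity and unpack it
wget -O /tmp/myapp.zip "http://ciserver:8111/guestAuth/repository/download/MyProject_Build/lastSuccessful/myapp.zip"
unzip -o /tmp/myapp.zip -d /opt/myapp
# restart the application service here as appropriate

The same script is what a cron job would run if you go the automated route described next.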
If you want an automated deployment, you can schedule a cron job (or a Windows scheduled task) to run the script at regular intervals. If nothing has changed, it doesn't matter much. I question the wisdom of this, though, given that it may disrupt someone's testing by restarting the system involved.
Having TeamCity push the changes as they happen is not something it does out of the box (as far as I know), but you could roll your own, for example by triggering something via one of TeamCity's notification methods, such as e-mail. I just question the utility of that. Do you want your system changing at random intervals just because someone happened to check something in? I would think it preferable to actually request the new version.