Since today, the CF push task in Bamboo hangs on the "Uploading app files" step. I didn't change anything besides some environment variables and, of course, a bit of code. The log looks like this:
Creating/updating app App-X...
Uploading App-X... Uploading app files
from: /home/bamboo/bamboo-agent-home/xml-data/build-dir/App-X-JOB1/
This runs forever, and the Application Cloud never receives any updates.
Are there any good ways to debug CF push tasks?
You should try to debug your deployment as close as possible to the way you execute it on Bamboo. If you do a cf push, the cf CLI automatically tails the staging logs for you, so if you don't see any output during staging, it's unlikely that cf logs will tell you anything more.
Since the last thing you're seeing from the log snippet posted above is an "uploading" statement, I would check these things:
Use a more recent version of the cf CLI; a recently added upload progress bar might show you whether the upload is making any progress at all.
Check whether you have a .gitignore or .cfignore file in the directory you're pushing from, so that you don't push any files you don't need (docs); see the sketch after this list.
A hanging push may also be a plain malfunction of the CF platform you're using, e.g. due to unavailability of CF's internal blob store, no cells left for staging, etc.; there are tons of possibilities here.
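As a minimal sketch of the second point (the path is taken from the log above; the ignore patterns are just examples to adapt):

    # check how much data you are actually uploading; a huge upload can look like a hang
    du -sh /home/bamboo/bamboo-agent-home/xml-data/build-dir/App-X-JOB1/
    # then add a .cfignore in that directory listing what the app doesn't need, e.g.:
    #   .git/
    #   tmp/
    #   node_modules/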
Disclaimer: I'm a co-founder at Meshcloud and we provide public and private (also on-site/hybrid) managed Cloud Foundry hosting in the EU.
You can get the full trace log of a push by using cf push -v, which might give you a hint of where (and possibly why) it is stuck.
Other useful debugging tools are cf logs / cf logs --recent and cf events, but if the push itself simply hangs (rather than failing), they might not produce anything at all. Have you tried waiting for the timeout?
Also, please make sure you're using the latest cf cli version.
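In shell terms, the commands mentioned above look like this (replace App-X with your app's name):

    # full trace of the push; CF_TRACE=true cf push is equivalent
    cf push -v
    # recent logs and platform events for the app
    cf logs App-X --recent
    cf events App-X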
Related
Why am I getting this error message for my apps deployed with Github at Heroku?
There is an issue with the GitHub token for this app. Disconnect and reconnect to restore functionality
We had the same issue before; it happened with private repositories, and disconnecting/reconnecting from time to time seemed to do the trick. But after a while we started to grow and needed more automation.
We looked into all kinds of CI/CD tools, like Codeship, CircleCI, etc., and ended up choosing DeployBot; we stuck with it because it works really well for us and fits our needs best.
Either way, these tools can be lifesavers for the team, no matter which one you end up using.
I am trying to push an app to Cloud Foundry from Jenkins, and it complains with this:
org.cloudfoundry.client.v2.ClientV2Exception: CF-AppResourcesFileModeInvalid(160003): The resource file mode is invalid: File mode '444' with path '.git/objects/pack/pack-af4cdbe6faac9d245253dafc1ecae06dc3fa5816.pack' is invalid. Minimum file mode is '0600'
at org.cloudfoundry.util.JobUtils.getError(JobUtils.java:81)
at reactor.core.publisher.MonoThenMap$ThenMapMain.onNext(MonoThenMap.java:120)
at reactor.core.publisher.FluxFilter$FilterSubscriber.onNext(FluxFilter.java:96)
I have tried:
1. Doing chmod 666 (and even 777) before the build step.
2. Adding these to my .cfignore:
scripts
.git/
.git/objects/pack/*
plugins/**/*
/.bundle
tmp/
.pack
3. Wiping the workspace in Jenkins and deleting the app on CF before another try.
Nothing works.
One interesting thing: after a fresh commit to .cfignore (editing a line and pushing to git), the first build in Jenkins works. Subsequent builds fail.
Any help?
Thanks!
The root issue is that the Cloud Foundry Java Client pushes the entire contents of the configured path to the server. The Cloud Foundry CLI automatically filters out source-control directories (and possibly all hidden directories), thus filtering out the most common places to see file modes below 0600, but that behavior isn't actually documented anywhere, so we don't match it. I've chatted with the lead of the CLI team, and they'll document that behavior, at which point we'll implement what they spec.
The .cfignore file doesn't work in the client yet either, but once that is properly spec'd by the CLI team, we'd work that issue as well.
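In the meantime, one possible workaround is to stage a clean copy of the committed tree so the Java client never sees .git at all; a rough sketch (the staging path is hypothetical):

    # export the committed tree, without .git, into a separate staging directory
    rm -rf /tmp/app-staging && mkdir -p /tmp/app-staging
    git archive HEAD | tar -x -C /tmp/app-staging
    # then point the Jenkins CF push step at /tmp/app-staging instead of the workspace root

This also sidesteps the read-only pack files that trip the 0600 check.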
A friend and I want to work remotely on a Moodle website. I have the application installed locally, but I'm not sure what I should commit to our git repository and what I shouldn't.
I'm looking for a workflow that allows someone to clone the code, run a CLI command or something, and be up and running. Since our machines are development machines, I'm trying to keep the number of steps low (i.e., we can both just share the same configuration file).
You may want to take a look at Moosh ( https://github.com/tmuras/moosh ).
It can script up a lot of the things I think you want to do (set settings, create courses and users, etc.).
You could then create a command-line script that would call Moosh and prepare a lot of the settings that you will eventually want on your live site (what it cannot do is take the settings from an existing site and apply them automatically to a new site).
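A rough sketch of such a script (the command names are from the Moosh docs, but check moosh --help for the exact arguments before relying on these; the values are made up):

    # run from the Moodle root directory
    moosh user-create --password Test1234# --email dev@example.com devuser
    moosh course-create devcourse
    moosh config-set sitename "Dev Site"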
I'm learning FluentMigrator. The thing that I like about FM is that it supports the idea of Forward and Back for migrations (aka Up/Down). I'm finding that it's not ideal about this; there are some holes. Still, it's good.
This leads me to wonder if there are any deployment tools (NAnt, MSBuild, or others) that support this idea of rolling forward and back. The scenario that I'm using it in is the deployment of a web app with a related database.
Ideally I'd like to set up my deployment so that, should any part of it fail, it will revert to the previous known working configuration. With FM, this is pretty easy to do (but there are rough spots), so that covers the db. How about the files that make up the web app? Do any deploy tools have support for this?
Deploying to a Windows Server. Assume that I can't make any changes to the server.
I don't know of any Microsoft-centric, automated provisioning/deployment tools like Capistrano. Here are some tools I've heard of, but never used:
MSDeploy, for deploying web applications.
Microsoft Deployment Services, for managing operating system configuration
Microsoft's System Center Configuration Manager
BladeLogic
HP's Operations Center
Up until about three months ago, we did our deployment/provisioning using custom MSBuild scripts. After a server is provisioned, deploys happen automatically using Robocopy to copy files to a share on the application server, updating changed application binaries and markup files. We've never had a need to rollback any of our deployments, but since our scripts are custom, we could write the logic if we needed to.
MSBuild is a terrible deployment/provisioning language. For the past three months, we've been writing all new scripts in, and porting existing ones to, PowerShell. It is wonderful. With version 2, there is support for running commands on remote servers, like SSH. We haven't used that functionality yet, but I'm looking forward to pushing setup scripts to remote servers to provision and deploy at the same time.
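The copy step itself can be a single command; a sketch of the Robocopy deploy described above (the paths and share are hypothetical):

    # mirror the build output to the application server's share
    robocopy C:\build\output \\appserver\wwwroot /MIR

/MIR makes the destination match the source, including deleting removed files; drop it if you only want to add and update files.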
We have been using Git to do our deploys for the last 6 months.
Here is the whole process:
The CI server builds the project
The CI server checks it into a local git repository
The CI server pushes the changes to the centralised git repository
The user creates an empty repository on the live server
The user adds the central git repository to the remotes
The user pulls the latest version over https (no need to open any ports)
It is a lot to set up in the beginning, but once set up it works great. Deploys take seconds, as only changed files get copied.
Another great thing about this method is that git keeps a history of changes, so rolling back is pretty simple. You can also roll back a few revisions, and it's done straight on the live server. If something goes wrong, reverting is super fast.
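In shell terms, the live-server side of the list above might look like this (the remote URL is hypothetical):

    # one-time setup on the live server
    git init
    git remote add origin https://git.example.com/myapp.git
    # each deploy: pull the latest version over https
    git pull origin master
    # roll back one (or a few) revisions if something breaks
    git reset --hard HEAD~1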
Also, you can save some time if you use a hosted git service (GitHub) for your central repository.
This is a very brief description but I can give you more info if you want.
Of course! My favorite is Capistrano. This was originally built for Ruby but I've found that it works just as well for other languages.
https://github.com/capistrano/capistrano
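Day-to-day use, once a Capfile and deploy recipe are set up, is just two commands (Capistrano 2 syntax; newer versions prefix a stage, e.g. cap production deploy):

    cap deploy
    # revert to the previously deployed release if the new one misbehaves
    cap deploy:rollback

deploy:rollback gives you the "revert to the previous known working configuration" behaviour you asked about: Capistrano keeps the last few releases on the server and just repoints a symlink.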
We decided to use Amazon AWS cloud services to host our main application and other tools.
Basically, we have an architecture like this:
TESTSERVER: The EC2 instance our main application is deployed to. Testers have access to the application.
SVNSERVER: The EC2 instance hosting our Subversion repository.
CISERVER: The EC2 instance where JetBrains TeamCity is installed and configured.
Right now, I need CISERVER to check out code from SVNSERVER, build it, unit test it if the build is successful, and, after all tests pass, deploy the artifacts of the successful build to TESTSERVER.
I have finished configuring CISERVER to pull the code, build, test, and produce artifacts. But I couldn't work out how to deploy the artifacts to TESTSERVER.
Do you have any suggestion or procedure to accomplish this?
Thanks for help.
P.S.: I have read this question and am not satisfied.
Update: There is a deployer plugin for TeamCity which allows publishing artifacts in a number of ways.
Old answer:
Here is a workaround for the issue that TeamCity doesn't have built-in artifacts publishing via FTP:
http://youtrack.jetbrains.net/issue/TW-1558#comment=27-1967
You can
create a configuration which produces build artifacts
create a configuration which publishes artifacts via FTP
set an artifact dependency in TeamCity from configuration 2 to configuration 1
Use either manual or automatic triggering to run configuration 2 with the artifacts produced by configuration 1. This way, your artifacts will be downloaded from build 1 to configuration 2 and published to your FTP host.
Another way is to create an additional build step in TeamCity for configuration 1, which publishes your files via FTP.
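For instance, a command-line build step that uploads with curl could look like this (host, path, and credentials are hypothetical):

    # upload the artifact to the FTP host as part of the build
    curl -T build/artifact.zip --user deploy:secret ftp://ftp.example.com/releases/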
Hope this helps,
KIR
What we do for deployment is that the QA people log on to the system and run a script that deploys by pulling from the TeamCity repository whenever they want. They can see in TeamCity (and get an e-mail) if a new build happened, but regardless, they just deploy when they want. In terms of how to construct such a script, the TeamCity component involves retrieving the artifacts. That is why my answer references getting the artifacts by URL; that is something any reasonable script can do using wget (which has a Windows port as well) or similar tools.
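As a sketch, TeamCity serves the artifacts of the last successful build under a stable URL, so the retrieval part of such a script can be a single call (the build type ID, artifact name, and credentials are hypothetical):

    # fetch the newest artifact from TeamCity's repository/download URL
    wget --http-user=qa --http-password=secret "http://ciserver/httpAuth/repository/download/MyProject_Build/.lastSuccessful/app.zip"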
If you want an automated deployment, you can schedule a cron job (or a Windows scheduled task) to run the script at regular intervals. If nothing has changed, it doesn't matter much. I question the wisdom of this, though, given that it may disrupt someone's testing by restarting the system involved.
Having TeamCity push the changes as they happen is not something TeamCity does out of the box (as far as I know), but you could roll your own, for example by triggering off one of TeamCity's notification methods, such as e-mail. I just question the utility of that: do you want your system changing at random intervals just because someone happened to check something in? I would think it preferable to actually request the new version.