Why should I gitignore the .elasticbeanstalk folder? - deployment

I have an Elastic Beanstalk Python Application.
I already made a build script that generates a deploy.zip file, which I then deploy to EB. It works just as it's supposed to.
After building the script that produces an EB-compatible artifact (my deploy.zip), I began configuring the EB CLI so I can run eb deploy from my gitlab-ci: it should deploy to the EB homologation server when there is a commit on the development branch, and to EB production when the commit gets into master. (Right now I'm just working on the homologation server.)
I read the documentation and noticed that eb would build the artifact by itself. But since I already had my own build script, I followed Deploying an Artifact Instead of the Project Folder and created a .elasticbeanstalk folder containing a config.yml with the following configuration:
deploy:
artifact: deploy.zip
Then I ran eb init, set everything (region, ID, key), and selected my existing project.
When I ran eb deploy, it worked just as it was supposed to. I suspected that eb might be building the artifact by itself, so I checked the config file and noticed that eb had added a bunch of other settings to it; my deploy config was still there. As another test I deleted my deploy.zip, and the next eb deploy failed, just as it was supposed to.
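For reference, after eb init the file looked roughly like this (application, environment, key pair, platform, and region are placeholders for my actual settings):

branch-defaults:
  development:
    environment: my-app-homolog
global:
  application_name: my-app
  default_ec2_keyname: my-keypair
  default_platform: Python
  default_region: us-east-1
  profile: eb-cli
  sdk: default
deploy:
  artifact: deploy.zip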
Up to this point everything was going just as planned, so I ran git status to check before adding the .elasticbeanstalk folder to git. To my surprise, the folder was not listed and the .gitignore file had been changed: it now contained .elasticbeanstalk.
That made me wonder whether I should add this folder to git at all, since eb's default behavior is to ignore it.
I was planning to commit the eb configuration and set the keys using environment variables, as described in the Configuration Settings and Precedence section.
I tried running eb deploy without the configuration, just passing env vars before the command, something like AWSAccessKeyId=<access_key> AWSSecretKey=<secret_key> eb deploy, but it said I should run eb init first.
So shouldn't I track my eb configuration in my git repo? If not, how should I proceed with a CI deploy to EB?

.elasticbeanstalk is where the eb CLI stores its configuration. As you wrote, a single file, config.yml, lives there, so you can create it yourself. When you call eb init, your version is overwritten by the command.
If you have only one environment, or security is not an issue, then having that configuration in your repository is a good idea. You are exposing some details like the SSH key name, so for security reasons not everyone should see it; by everyone I mean a public repo. Perhaps that is why they put it in .gitignore.
Note that you usually have test, pre-prod, and prod environments, so the configuration is needed anyway. The next step is a separate configuration repository for all that devops-ish stuff, where you can have a directory or branch per environment.
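For the CI part of the question: config.yml itself holds no credentials (those live in ~/.aws), so you can commit it and provide the keys through the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables, which the eb CLI picks up, e.g. set as secret variables in GitLab. A minimal .gitlab-ci.yml job sketch, with the job, branch, and build-script names as placeholders:

deploy_homologation:
  stage: deploy
  only:
    - development
  script:
    - pip install awsebcli
    - ./build.sh            # the existing script that produces deploy.zip
    - eb deploy

If eb init wrote a profile: line into config.yml, you may need to remove it so the CLI falls back to the environment variables.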
I agree with you that it looks strange; .ebextensions is not secured that way...
If I've missed something, post a question and I'll append a clarification to this answer.

Related

Building multiple Gradle projects in Jenkins with AWS CodePipeline

I have a Gradle project that consists of a master project and 2 others that are included using the includeFlat directive. Each of these 3 projects has its own repo on GitHub. To build it, I check out all 3 projects into a common top folder, then cd into the master project and run gradle build. And it works great!
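For context, the master project's settings.gradle does something like this (project names are illustrative):

// the two sibling projects live next to the master project's folder,
// which is the layout includeFlat expects
includeFlat 'moduleA', 'moduleB'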
Now I need to deploy the resulting app to AWS EB (Elastic Beanstalk), which also works great when I produce the artifact locally and deploy it manually. I want to automate the process, so I'm trying to set it up using CodePipeline + Jenkins as described in this document, adjusted for Gradle.
The problem is that if I specify 3 Sources in the pipe, I end up with my projects extracted on top of each other, creating a mess in the Jenkins workspace. I need to somehow configure each project to be output to its own directory within the Jenkins workspace, and I just don't see a way to do it (at least in the UI).
Then, of course, even if I achieve what I want, I somehow need to cd into the master directory to run gradle build, and again I'm not sure how to do that.
P.S. Great suggestions from @Phil, but unfortunately it seems that CodePipeline does not currently support Git submodules or subtrees.
I would start a common build whenever changes happen on any of the 3 repos, with, say, a 5-minute delay, so there is a single build even if changes are introduced to more than one repo.
I can't see a better way to deal with the deployment than using eb deploy... the old way... Install the AWS tools on your Jenkins machine, create a deployment job triggered on a successful build, and put a bash script doing the deployment there. Please post more details about your deployment; that way I can help with the deployment script.
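A minimal sketch of such a deployment shell step, assuming the checkout layout from the question and a previously run eb init in the master directory (names are placeholders):

#!/bin/bash
set -e
cd "$WORKSPACE/master-project"   # the common top folder layout from the question
gradle build                     # produces the deployable artifact
pip install --user awsebcli      # the eb CLI, installed on the Jenkins machine
eb deploy my-eb-environment      # uses the committed .elasticbeanstalk config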

Cloud Foundry plugin throws error while pushing from Jenkins: CF-AppResourcesFileModeInvalid(160003)

I am trying to push an app to Cloud Foundry from Jenkins, and it complains with this:
org.cloudfoundry.client.v2.ClientV2Exception: CF-AppResourcesFileModeInvalid(160003): The resource file mode is invalid: File mode '444' with path '.git/objects/pack/pack-af4cdbe6faac9d245253dafc1ecae06dc3fa5816.pack' is invalid. Minimum file mode is '0600'
at org.cloudfoundry.util.JobUtils.getError(JobUtils.java:81)
at reactor.core.publisher.MonoThenMap$ThenMapMain.onNext(MonoThenMap.java:120)
at reactor.core.publisher.FluxFilter$FilterSubscriber.onNext(FluxFilter.java:96)
I have tried:
1. Doing chmod 666 (and even 777) before the build step.
2. Adding these to my .cfignore:
scripts
.git/
.git/objects/pack/*
plugins/**/*
/.bundle
tmp/
.pack
3. Wiping the workspace in Jenkins and the app on CF before another try.
Nothing works.
One interesting thing: after a fresh commit to .cfignore (editing a line and pushing to git), the first build in Jenkins works. Subsequent builds fail.
Any help?
Thanks!
The root issue is that the Cloud Foundry Java Client pushes the entire content of the configured path to the server. The Cloud Foundry CLI automatically filters out source control directories (and possibly all hidden directories); this filters out the most common places to see file modes below 0600. But that behavior isn't actually documented anywhere, so we don't match it. I've chatted with the lead of the CLI, and they'll document that behavior, at which point we'll implement what they spec.
The .cfignore file doesn't work in the client yet either, but once that is properly spec'd by the CLI team, we'll work that issue as well.
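Until then, one possible workaround (not part of the client itself) is to hand the plugin a staging directory containing only tracked files, so no read-only .git pack files get pushed; for example, a shell step before the push, with paths as assumptions:

# export tracked files only, without the .git directory
rm -rf "$WORKSPACE/cf-staging" && mkdir "$WORKSPACE/cf-staging"
git archive HEAD | tar -x -C "$WORKSPACE/cf-staging"

and then point the Cloud Foundry plugin at cf-staging instead of the workspace root.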

Is it possible to have ansible use a "remote" playbook for git-based continuous deployment?

I need to manage a few servers that run code that is currently deployed there as a couple of git repositories. I would like to store in the project's repository the parts (if not all) of the playbook that are relevant to that repository: for example, the list of package dependencies, virtualenv requirements, and configuration templates. This also allows those to change per branch/commit, meaning I can make sure that if I need to deploy a specific branch/commit, the playbook that is correct for that commit is used, if, say, the configuration template changed.
It seems like the only solution is to check out the git repository locally. Is it possible to tell Ansible to run a remote playbook (from the git repository that is being checked out on the server)? I was thinking of having Ansible run ansible with a local connection on the remote host, but I haven't tried it to see whether it would actually work.
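Concretely, the idea was something like this running on the remote host itself (repo path and playbook name are made up):

ansible-playbook -i localhost, -c local /srv/myapp/ansible/deploy.yml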
How do people manage to use Ansible for git-based continuous deployment without some mechanism for running a remote playbook?
Take a look at ansible-pull.
It pulls the repo and executes the playbook.
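A typical invocation, e.g. from cron on each server (URL, branch, and directory are illustrative):

# clones or updates the repo, then runs the playbook from it over a local connection
ansible-pull -U https://git.example.com/myapp.git -C development -d /srv/myapp local.yml

By default ansible-pull looks for a playbook named after the host, falling back to local.yml.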

What is server.xml for in the Java DB Web Starter Git code?

I've created a Liberty Bluemix project, and Bluemix created the Git project. I've downloaded it in Eclipse and now I want to enable more features.
There's a server.xml there, but no matter what features I add to it, the Bluemix logs say I am still using the default ones.
I am just pushing the changes to Git (so Jazz will push them to Bluemix).
What am I doing wrong?
From my understanding, the server.xml from the starter is for your local Liberty runtime, which you can also fire up from within the Maven plugin. If you want to change your Bluemix Liberty feature set, you can do so by setting cf environment variables.
See my recent blog post on how I did this: https://bluemixdev.wordpress.com/2016/02/07/bootstrap-a-websphere-liberty-webapp/
I added the following to the build script in my deployment pipeline.
cf set-env blueair-web JBP_CONFIG_LIBERTY "app_archive: {features: [servlet-3.1]}"
cf push "${CF_APP}"
Alternatively, you can set the Liberty feature set in your manifest; see this blog post on how to do so: https://bluemixdev.wordpress.com/2016/02/21/specify-liberty-app-featureset-in-manifest/
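A sketch of that manifest approach (the app name is an assumption):

applications:
- name: blueair-web
  env:
    JBP_CONFIG_LIBERTY: "app_archive: {features: [servlet-3.1]}"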
If all you're trying to do is update the feature list, then setting JBP_CONFIG_LIBERTY is the easiest way.
But if you're looking to provide more config in the server.xml then you'll need to provide a server package.
For example, for this case I can either:
1. Issue a cf push myBluemixApp directly from the "videoServer" directory.
2. Package the server using the wlp/bin/server package videoServer --include=usr command and then push the resulting zip file with cf push myBluemixApp -p wlp/usr/servers/videoServer/videoServer.zip (sketched below; see https://developer.ibm.com/bluemix/2015/01/06/modify-liberty-server-xml-configurations-ibm-bluemix/).
3. Manually, or using your build, create a wlp dir structure keeping only the files you want to upload, as I've done in the deploy directory here: https://hub.jazz.net/project/rvennam/Microservices_Shipping/overview. You can then push that directory as I'm doing (see manifest.yml). This will work with Jazz/DevOps Services.
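Put together, option 2 looks roughly like this (using the default wlp directory layout):

cd wlp/bin
./server package videoServer --include=usr
cf push myBluemixApp -p ../usr/servers/videoServer/videoServer.zip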
Packaging the server.xml within a war file is not the correct way.

How to configure a Thoughtworks:GO task to deploy a repo?

I'm trying to figure out how to create a task (custom-command, not ant/rake, etc.) to deploy a git repo to some server/target (in this case Heroku). If I were to do this manually, it's just git push heroku master.
I've created a basic pipeline/stage/job/task (custom-command, in this case a Python script) and a single agent. The pipeline has a material (a git repo, with a name). Inside the script I'm printing out os.environ.items(); it has several variables, including the SHA of the latest commit, but no URL for the actual repository.
So how is the agent (or task) supposed to know what repository to deploy?
The pipeline knows the material name, and I've tried passing in an environment variable such as ${materialName} (which didn't work). I could hard-code a URL in the task, but that's not a real solution.
Thoughtworks:GO's documentation is shiny, but a bit sparse in the details. I'd have thought something this basic would be well documented, but if so, I haven't found it so far.
When a task runs on an agent, it clones the repository specified in the material (config). The .git/config wouldn't have the Heroku remote URL, and as such git push heroku master wouldn't work.
You would need to add Heroku remote url before you can perform a deployment of your git-repo to Heroku.
git remote add heroku git@heroku.com:project.git
where project is the name of your Heroku project. This needs to be done only once, unless you clean the working directory every time (the Stage Settings option that removes all files/directories in the working directory on the agent; you can see it in the UI as well: Admin -> Pipelines -> Stages -> Stage Settings tab), in which case you may have to add the remote URL via a task before you run the deploy task.
Once you've done so, you should be able to use the heroku xxxx commands (assuming you have the Heroku Toolbelt installed on the agent machine you're using for deploying) and push to Heroku as usual via git push heroku master.
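Putting it together, the custom-command task can point at a small script along these lines (the project name is a placeholder):

#!/bin/bash
set -e
# add the remote only if it is not already there, so the task is idempotent
git remote add heroku git@heroku.com:project.git 2>/dev/null || true
git push heroku master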