I have been working for some time on a project with a microservices architecture, where each service has its own environment variables handled through a .env file per service/repository.
A large share of these variables are other services' IPs and keys for external resources, which differ across environments (Development, Staging, and Production), so the .env files are not simple ones.
Our development pace is fast, and these variables change frequently with new features and changes implemented by teammates working on issues related to a service. As a result, almost every time someone wants to work with a service they get blocked and have to update the .env file first. We therefore end up requesting and sharing .env files with each other all the time, and there is no single source of truth for them.
I was wondering if someone else has had this problem or a similar one before, and what approaches they have followed to solve or improve it.
Is there any application or framework for sharing and managing .env files in a team in an automated way?
Thanks in advance!
EDIT
Just to be clear, these files are not added to source control, and they are properly handled in CI/CD.
I was talking more about local development: setting up services locally and keeping the local .env files up to date in an easy way.
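To make concrete what "keeping the local .env files up to date in an easy way" could look like: even before adopting a dedicated tool, a tiny sync script run against a shared, access-controlled store can serve as a stopgap. A minimal sketch in Python, where the store URL, service name, and token variable are all hypothetical placeholders:

import os
import urllib.request

ENV_STORE = "https://envstore.internal.example.com"  # hypothetical store
SERVICE = "billing-service"                          # hypothetical service name

def sync_env(service: str, dest: str = ".env") -> None:
    # Fetch the current development .env for this service and
    # overwrite the local copy with it.
    req = urllib.request.Request(
        f"{ENV_STORE}/v1/{service}/development.env",
        headers={"Authorization": f"Bearer {os.environ['ENV_STORE_TOKEN']}"},
    )
    with urllib.request.urlopen(req) as resp:
        contents = resp.read().decode()
    with open(dest, "w") as f:
        f.write(contents)

if __name__ == "__main__":
    sync_env(SERVICE)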
To summarize the feedback provided by coworkers and the community in both r/SoftwareEngineering and r/softwaredevelopment (thank you all for it), some of the most useful resources are:
This post about Common Anti-Patterns when Managing Passwords and Application Secrets: https://blog.envkey.com/managing-passwords-and-secrets-common-anti-patterns-2d5d2ab8e8ca
This one with Secure Strategies For Managing Passwords, API Keys, and Other Secrets. https://blog.envkey.com/secure-strategies-for-managing-passwords-api-keys-and-other-secrets-4cc3b2758c02
This application for sharing API keys with your team by self-hosting and managing them: https://envault.dev/
And I want to quote what u/nickthemagicman commented, which I think is an important point to keep in mind:
But due to the fact that ya'll are still using .env files for this long and it's been this chaotic and no one has fixed this by now, it sounds like your biggest hurdle is going to be to get the team buy in, since it sounds like there's no centralized management either.
Not sure what stack you're using but we're solving this with Infisical.
It provides a source of truth for your environment variables and supports different environments (development, staging, and production). Your team can either automatically inject those variables into your local process or manually pull them back to update your .env file, whichever you're most comfortable with; it's end-to-end encrypted.
We ran into the same issues you're outlining and are finally solving them.
Related
I would like to ask what people use to provision an ephemeral preview environment in AWS EKS for the service under test. In addition, I am curious how you provision any dependent services (such as a database).
E.g. I am working on a back-end service and would like to deploy an isolated, ephemeral version of this service packaged from my feature branch, including the database. Furthermore, I would like a copy of a front-end service in my isolated environment to test my back-end against.
Any thoughts would be appreciated
Thanks
Sachin
You can roll your own solution by wiring together your own CI/CD tooling (Jenkins, CircleCI, Buildkite, GitHub Actions, etc.) to trigger the build and deployment of a preview environment, tying into webhooks on your source repository. This would include building the modified code, deploying it to a staging environment, and then seeding that environment with some type of data.
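As a rough sketch of the webhook wiring just described (the port and the trigger function are hypothetical placeholders, and a real receiver would also verify the webhook's signature):

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def trigger_preview_build(branch: str) -> None:
    # Placeholder: call your CI system's API here to build and
    # deploy a preview environment for this branch.
    print(f"Triggering preview environment build for {branch}")

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        # GitHub-style push payloads carry the branch in "ref",
        # e.g. "refs/heads/feature/my-branch".
        branch = payload.get("ref", "").removeprefix("refs/heads/")
        if branch:
            trigger_preview_build(branch)
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), WebhookHandler).serve_forever()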
There is a bit of nuance to getting this right. You should check out https://ephemeralenvironments.io/, which is a good template of what needs to go into these environments.
A lot of other folks use services that provide this as a SaaS platform; Shipyard.build, Release, and Velocity.tech are a few of your options.
Disclaimer: I'm on the Operations team at Shipyard
Hope this helps!
I would like to ask how to properly deploy projects with multiple dependencies to a production server.
E.g. my project depends on node (npm), ruby (for sass), composer, gulp etc. - things related to the development process.
So maybe a good idea is to avoid all those things on the production server, and create a separate repository that holds the project in a 'production-ready' state with all dependencies (e.g. a vendor/ directory with the composer deps), and push it directly to production.
According to this answer it looks like I should build everything in a dev or local environment and then copy the files to production, which might be tedious, so maybe it would be good to hold everything in a separate repository.
Or, are there already some best practices regarding this? Could somebody help me with the decision?
Thanks!
I recently joined a company as a Release Engineer, where a large number of development teams build numerous services, applications, and web apps in various languages, with various interdependencies among them.
I am trying to find a way to simplify and preferably automate releases. Currently the release team is doing the following to "release" the software:
CURRENT PROCESS OF RELEASE
1. Diff the latest revision from SCM between the QA and INTEGRATION branches.
2. Manually copy/paste "relevant" changes between those branches.
3. Copy the latest binaries to the right location (this is automated using a .cmd script).
4. Restart any services.
MY QUESTION
I am hoping to avoid steps 1 and 2 altogether (obviously), but am running into issues where differences between the environments are causing the config files to differ per environment (e.g. QA vs. INTEGRATION). Here is a sample:
IN THE QA ENVIRONMENT:
<setting name="ServiceUri" serializeAs="String">
<value>https://servicepoint.QA.domain.net/</value>
</setting>
IN THE INTEGRATION ENVIRONMENT:
<setting name="ServiceUri" serializeAs="String">
<value>https://servicepoint.integration.domain.net/</value>
</setting>
If you look closely, the only difference between the two <setting> tags above is the URL in the <value> tag. This is because the QA and INTEGRATION environments are in different data centers and are ever so slightly out of sync (and growing further apart as development gets faster/better/stronger). Changes such as this, where only the URL/endpoint differs, are TO BE IGNORED during a "release" (i.e. they are not "relevant" changes to merge from QA to INTEGRATION).
Even in a regular release (about once a week) I have to deal with a dozen config file changes that have to be released from QA to INTEGRATION, and I have to go through each config file manually and copy/paste the non-URL-related changes between the files. I can't simply take an entire package that the CI tool spits out from QA (or after QA), since the URLs/endpoints are different.
Since there are multiple programming languages in use, the config file example above could be C#, C++, or Java, so I am hoping any solution would be language agnostic.
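To illustrate, a diff of two config files that skips the environment-specific endpoints could look like the following Python sketch; the set of setting names to ignore is a hypothetical example, and it assumes the <setting>/<value> shape shown above:

import xml.etree.ElementTree as ET

IGNORED_SETTINGS = {"ServiceUri"}  # endpoints that legitimately differ

def settings(path: str) -> dict:
    # Map each setting name to its value text for one config file.
    root = ET.parse(path).getroot()
    return {s.get("name"): (s.findtext("value") or "").strip()
            for s in root.iter("setting")}

def relevant_diff(qa_path: str, integ_path: str) -> dict:
    # Return only the "relevant" differences, i.e. everything except
    # the ignored environment-specific settings.
    qa, integ = settings(qa_path), settings(integ_path)
    names = (set(qa) | set(integ)) - IGNORED_SETTINGS
    return {n: (qa.get(n), integ.get(n))
            for n in names if qa.get(n) != integ.get(n)}

if __name__ == "__main__":
    diff = relevant_diff("qa.config", "integration.config")
    for name, (qa_val, integ_val) in diff.items():
        print(f"{name}: QA={qa_val!r} INTEGRATION={integ_val!r}")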
SUMMARY OF ENVIRONMENTS/PROGRAMMING LANGUAGES/OS/ETC.
Multiple programming languages - C#, C++, Java, Ruby. Management is aware of this as one of the problems, since the Release team has to be king-of-all-trades, and is addressing it.
Multiple OS - Windows 2003/2008/2012, CentOS, Red Hat, HP-UX. Management is addressing this too - starting to consolidate and limit to Windows 2012 and CentOS.
SCM - Perforce, TFS. Management is trying to move everyone to a single tool (likely TFS).
CI is being advocated, though not mandatory - Management is pushing the change through, but it is taking time.
I have given the example of QA and INTEGRATION, but in reality there is QA (managed by developers+testers), INTEGRATION (managed by my team), STABLE (released to by my team but supported by Production Ops), and PRODUCTION (supported by Production Ops). These are the official environments; devs and test teams currently have a few more unofficial ones. I would eventually want to start standardizing/consolidating these unofficial envs too, since devs+testers should not have to worry about doing this kind of stuff.
There is a lot of work being done to standardize how the binaries are being deployed using tools like DeployIT (http://www.xebialabs.com/products) which may provide some way to simplify these config changes.
The dev teams are agile and release often, but that just means more work diffing config files.
SOLUTIONS SUGGESTED BY TEAM MEMBERS:
The current mindset is to use a load balancer and standardize names across different environments, but I am not sure if "a process" such as this is the right solution. There must be a better way that starts with how devs write configs and extends to how release environments meet dependencies.
Alternatively, some team members are working on install scripts (InstallShield / MSI) to automate find/replace of URLs/endpoints between envs. I am hoping this is not the solution, but it is doable.
If I have missed anything or should provide more information, please let me know.
Thanks
[Update]
References:
Managing complex Web.Config files between deployment environments - C# web.config specific, though a very good start.
http://www.hanselman.com/blog/ManagingMultipleConfigurationFileEnvironmentsWithPreBuildEvents.aspx - OK, though at first look this seems rather rudimentary and may break easily.
Generally the problem isn't too difficult - you need a branch for each of the environments and a CI build set up for each. A merge to the QA branch would then trigger a build of that code and a custom deployment to QA. Simple.
Now, managing multiple config files isn't quite so easy, unless you have one for each environment, in which case you just call them Int.config, QA.config, etc., store them all in the SCM, and pick the appropriate one in each branch's deployment script (e.g. when the build for QA runs, it picks QA.config, copies it to the correct location, and renames it to the correct name). Incidentally, this is the approach I tend to use, as it's very simple.
If you have multiple configs you need to use, then it's always going to be a manual process, but you can help yourself by copying all the relevant configs to a build staging area that an admin will use to perform the deployment. It's a good first step in that the build they have in the staging directory will be the correct one for them; they just have to choose which config to use, either during installation (e.g. as an option in the installer) or by manually copying the appropriate config over.
I would not try to manage some automated way of taking a single config file from source control and re-writing it with different data in the build or pre-deploy steps. That way lies madness, and a lot of continual hassle trying to maintain the data and the tooling. Keep separate configs in place and make sure the devs know to update all of them when they make a change. (Or you can hold one config in the SCM tree and make sure they know that merging their changes must not overwrite any existing modifications; multiple configs is easier.)
I agree with @gbjbaanb. Have one config for each environment. Get your developers to write apps that read their properties (including their URLs) from config files, and commit config files for each environment. Not only does this help you with deployment, but keeping config files under revision control provides reproducibility, full transparency, and an audit trail of your environment-specific settings.
Personally, I prefer to create a single deployable package that works in any environment by including all of the environment configs (even the ones you aren't using). You can then have some deployment automation that figures out which config file the apps should use and sets it up appropriately.
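As a sketch of that idea (file names and layout are hypothetical): the package ships every environment's config, and a small deploy-time step activates the right one.

import shutil
import sys

def activate_config(env: str, app_dir: str = ".") -> None:
    # The package ships qa.config, integration.config, etc. under
    # configs/; copy the matching one to the name the app reads.
    src = f"{app_dir}/configs/{env.lower()}.config"
    dst = f"{app_dir}/app.config"
    shutil.copyfile(src, dst)
    print(f"Activated {src} as {dst}")

if __name__ == "__main__":
    activate_config(sys.argv[1])  # e.g. python activate_config.py qa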
Thanks to @gman and @gbjbaanb for the answers (https://stackoverflow.com/a/16310735/143189, https://stackoverflow.com/a/16246598/143189), but I felt that they didn't help me solve the underlying problem I am facing, so I am restating it to make it clear.
The code seems very aware of the environment in which it runs. How do you write environment-agnostic code?
The suggestions in the answers above are to store one config file for each environment (an environment-config). This is possible, but any addition/deletion/edit of non-environment settings will have to be ported over to each environment-config.
After some study, I wonder if the following would work better?
Keep the config file's structure consistent/standardized, e.g. XML. Keep the environment-specific endpoints in this config file, but store them in a way that allows easy access to the specific individual nodes/settings (e.g. using XPath).
When deploying to a specific environment, the deployment tool should be able to parse the file (e.g. using XPath) and update the environment-specific endpoints to the values for the environment being deployed to.
The above is not a unique idea. There are some existing implementations that tackle the above solution already:
http://www.iis.net/learn/develop/windows-web-application-gallery/reference-for-the-web-application-package & http://www.iis.net/learn/publish/using-web-deploy/web-deploy-parameterization (WebDeploy)
http://docs.xebialabs.com/releases/3.9/deployit/packagingmanual.html#using-placeholders-in-ci-properties (DeployIt)
Home-spun solutions using XPath find and replace.
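A minimal home-spun sketch of that last approach in Python, using ElementTree's limited XPath support to rewrite the ServiceUri setting shown earlier (the per-environment endpoint map just reuses the sample URLs from above):

import xml.etree.ElementTree as ET

ENDPOINTS = {
    "qa": "https://servicepoint.QA.domain.net/",
    "integration": "https://servicepoint.integration.domain.net/",
}

def set_service_uri(config_path: str, env: str) -> None:
    tree = ET.parse(config_path)
    # Locate <setting name="ServiceUri">/<value> and point it at the
    # endpoint for the environment being deployed to.
    node = tree.getroot().find(".//setting[@name='ServiceUri']/value")
    node.text = ENDPOINTS[env]
    tree.write(config_path, encoding="utf-8", xml_declaration=True)

if __name__ == "__main__":
    set_service_uri("app.config", "integration")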
In short, while there are programming-language-specific solutions and programming-language-agnostic solutions, I guess the big downside is that release management needs to be considered during development too, or it will cause deployment headaches. I don't like that, since it sounds like "development should be aware of what tests will be designed". Whether there is a need AND a way to avoid this is the big question.
I'm working through the process of creating a "deployment pipeline" for a web application at the moment and am sifting my way through similar problems. Your environment sounds more complicated than ours, but I've got some thoughts.
First, read this book; I'm two-thirds of the way through it and it's answering every question I ever had about software delivery, and many that I never thought to ask: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912/ref=sr_1_1?s=books&ie=UTF8&qid=1371099379&sr=1-1
Version Control Systems are your best friend. Absolutely everything required to build a deployable package should be retrievable from your VCS.
Use a Continuous Integration server; we use TeamCity and are pretty happy with it so far.
The CI server builds packages that are totally agnostic to the eventual target environment. We still have a lot of code that "knows" about the target environments, which of course means that if we add a new environment, we have to modify all such code to make sure it will cope and then re-test it to make sure we didn't break anything in the process. I now see that this is error-prone and completely avoidable.
Tools like Visual Studio support config file transformation, which we looked at briefly, but we quickly realized it depends on environment-specific config files being prepared alongside the code by the developers in order to be added to the package. Instead, break out any settings that are specific to a particular environment into their own config mechanism (e.g. another XML file) and have your deployment tool apply them to the package as it deploys (see the sketch below). Keep these files in VCS, but use a separate repository so that revisions to config don't trigger new builds and falsely inflate the build number.
This way, your environment-specific config files only contain things that change on a per-environment basis, and only if that environment needs something different to the default. Contrary to @gbjbaanb's recommendation, we are planning to do whatever is necessary to keep the package "pure" and the environment-specific config separate, even if it requires custom scripting etc., so I guess we're heading down the path of madness. :-)
For us, Powershell, XML and Web Deploy parameterization will be instrumental.
I'm also planning to be quite aggressive about refactoring the config files so that the same information isn't repeated in several places.
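A sketch of what that deploy-time step might look like, assuming a hypothetical per-environment overrides file that reuses the <setting>/<value> shape from earlier in this thread:

import xml.etree.ElementTree as ET

def apply_overrides(base_path: str, overrides_path: str) -> None:
    base = ET.parse(base_path)
    overrides = ET.parse(overrides_path)
    for setting in overrides.getroot().iter("setting"):
        name = setting.get("name")
        target = base.getroot().find(f".//setting[@name='{name}']/value")
        if target is not None:
            # Override only what this environment declares;
            # everything else keeps the package default.
            target.text = setting.findtext("value")
    base.write(base_path, encoding="utf-8", xml_declaration=True)

if __name__ == "__main__":
    apply_overrides("app.config", "overrides/staging.config")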
Good luck!
I have finished developing the core of a web application I have been working on. Since I was the only developer, I just developed locally (LAMP stack) without using version control (probably stupid, but anyway...). Now that it is getting close to production-ready, I have a couple of other developers working with me, so I set up a repository for my code.
This is my question: I still want to be able to test any changes locally before pushing to production. How do I manage this with a repository without having to maintain two versions of my code that I have to sync up manually? For one, the production code has a few differences here and there (such as database constants, etc.). I'd like to be able to change my code in my local repository, test it on my local Apache server, then check the code directly into production (is this even possible using Eclipse?).
I am using Eclipse and Subversion (PHP code). I know I asked many questions, but hopefully you get the idea of what I am trying to do... and I assume it's rather common. Thanks.
In addition to the excellent answers you've gotten already, I'd like to emphasize that if there are differences between your dev and production code, you're adding risk. You should be using the same, well-tested code in both locations; any difference between the environments should be expressed in configuration files. Any configuration files in source control should be samples only; your deployment script should not push new configuration files to production.
This, in combination with tagged releases and a staging environment that mimics production, should help you promote your code smoothly to the production environment.
I would suggest a few things:
Use tags/branches in SVN. When the code is production ready, tag it with a unique name.
Set up a staging area for integration testing. After a release is tagged for staging, yank it from your VCS and copy it into the staging area. This can be as simple as a different directory tree or a second install of your server.
Put constants into separate files that can be copied/merged into the staging and deployment directories.
Test the staged version against dev to ensure everything works as it did in your dev environment. I would point staging at the production databases once I am sure it is working and ready to be promoted. Test that it also works against prod.
Once everything works in staging, update the production copy. I would suggest you create a clean deployment directory then copy that entire deployment over to the production server after copying/merging config settings.
This was my approach when dealing with Perl/CGI many years ago, and it worked pretty well. SVN handles tags/branching much better, so it should be easier to deal with. We had very few production problems once we started staging the files before pushing to prod.
It sounds like you haven't created any branches or tags, and probably have a "trunk" that isn't labeled as such. Best practices would dictate that you have a trunk for the current stable code, branches that you develop against, and tags that are actually used on the production site. There is a short description and diagram on Wikipedia.
Of course, that's just best practice. Your project sounds small enough that you could get away with splitting your code into a development/ directory and a production/ directory in your code repository. Check code into the development directory, and once a change is fully tested, merge it into the production directory.
Whether you do it the right way or the easy way, it's important to do something to separate your development code from your production code. As you add more developers, it will be increasingly unlikely that the development code base is stable because people are checking in code that hasn't been fully tested, isn't complete, whatever. Spending a little extra time on managing two branches of code will save you a lot of headaches later on.
Currently my workflow is as follows:
Locally, I maintain a git repo for each website I am working on. When the time comes to publish something, I compress the folder and upload this single file to the production server via SSH, then I decompress it, test the changes, move them to the live folder, and get rid of the .git folder.
I was wondering if using a git repo on the live server is a good idea. It seems to be at first, but it can be problematic if a change doesn't look the same on the production server as it does on the local development machine... this could start a fire...
What about creating a bare repo in some folder on the production server, then cloning from there to the public folder, pushing updates from the local machine to the bare repo and pulling from the bare repo in the public folder of the production server? Can anyone please provide some feedback?
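One common way to wire up that bare-repo idea is a post-receive hook in the bare repo that updates the public folder's clone on every push. A sketch (the paths are hypothetical, and the hook file must be executable):

#!/usr/bin/env python3
# hooks/post-receive in the bare repo: fast-forward the public
# folder's clone whenever new commits are pushed.
import os
import subprocess

PUBLIC_FOLDER = "/var/www/site"  # hypothetical public folder (a clone)

env = dict(os.environ)
# Git sets GIT_DIR while running hooks, which would otherwise point
# the pull at the bare repo instead of the clone.
env.pop("GIT_DIR", None)
subprocess.run(["git", "-C", PUBLIC_FOLDER, "pull", "--ff-only"],
               check=True, env=env)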
Later I read about Capistrano (http://capify.org), but I have no experience with this software...
In your experience what is the best practice/methodology to accomplish a website deployment/updates?
Thanks in advance and for your feedback.
I don't think that our method can be called best practice, but it has served us well.
We have several large databases for our application (20 GB+), so maintaining local copies on each developer's computer has never really been an option, and even though we don't develop against the live database, we do need to develop against a database that is as close to the real thing as possible.
As a consequence we use a central web server as well, and keep a development branch of our subversion trunk on it. Generally we don't work on the same part of the system at once, but when we do need to do that, or someone is making a lot of substantial changes, we branch the trunk and create a new vhost on the dev server.
We also have a checkout of the code on the production servers, so after we're finished testing we simply do an svn update on the production servers. We've implemented a script that executes the update command on all servers over SSH. This is extremely convenient, since our code base is large and takes a long time to upload. Subversion will only copy the files that have actually changed, so it's a lot faster.
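The update script itself can be tiny. A sketch of the idea in Python (the hostnames and checkout path are hypothetical):

import subprocess

SERVERS = ["web1.example.com", "web2.example.com"]  # hypothetical hosts
CHECKOUT_PATH = "/var/www/app"                      # hypothetical checkout

for host in SERVERS:
    print(f"Updating {host} ...")
    # Run `svn update` in the working copy on each production server.
    subprocess.run(["ssh", host, f"svn update {CHECKOUT_PATH}"], check=True)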
This has worked really well for us, and the only thing to watch out for is making changes on the production servers directly (which of course is a no-no from the beginning) since it might cause conflicts when updating.
I never thought about having a repository copy on the server. After reading it, I thought it might be cool... However, updating the files directly in the live environment without testing is not a great idea.
You should always update a secondary environment exactly matching the live one (web server + DB version, if any) and test there. If everything goes well, put the live site under maintenance, update the files, and go live again.
So I wouldn't make the live site a copy of the repository, but you could do so with the test env. You'll save SSH + compression time, plus you can check out any specific revision you'd like to test.
Capistrano is great. The documentation is spotty, but the mailing list is active, and getting it set up is pretty easy. Are you running Rails? It has some neat built-in stuff for Rails apps, but it is also used fairly frequently with other types of web apps.
There's also Webistrano, which is based on Capistrano but has a web front-end; I haven't used it myself. Another deployment system that seems to be gaining some traction, at least among Rails users, is Vlad the Deployer.