Who's responsible for deployment? [closed] - deployment

I work as an in-house developer for a manufacturing company. We make software for the manufacturing process, not really control software, more like process flow.
We are using a Scrum process to develop the software, albeit tailored to fit with our team and environment, and this is working quite well. We're coming to the end of a sprint and the software is at a stage where the product owner wants to deploy it.
Previously, i.e. before Scrum, we would have deployed the software. Now I feel that we have developed the software, passed all the user-defined/agreed release tests and demonstrated the software to the PO with a simulator; we have achieved our goals. We are ready to provide deployment support, but I don't think it should be our responsibility to deploy.
What are other people's experiences? Should the dev team do the deployment, or should we just hand over the completed software to the PO and provide support?
Summing up
A lot of great responses, thanks. The question may seem like I'm trying to squirm out of work or responsibility, maybe I am a little ;o) What I'm more interested in is other people's processes. The problem we face here is that if the dev team deploys the software, then we end up providing 24/7 production support for it. No problem, except there are only two of us. So, to let us get back to developing software rather than providing support all the time, I thought it might be helpful to get the "IT" team involved in the development process. Hopefully this will get their buy-in and then allow them to deploy and provide first-level support. We also have a plant in Mexico, and it's difficult for the dev team to go and deploy there; it makes more sense for the local support to do it, with guidance/advice from the developers.
Just to let you know, the IT engineers did deploy the software, with guidance/advice from dev. It's gone quite well, the customer is happy - he's got increased value from his software, and isn't that what it's all about?

I don't think Scrum as a methodology addresses deployment responsibility. I've worked for large organizations with a deployment team completely independent of the development team. I've also worked at small organizations where the development team handles deployment. Ideally, the deployment team is separate from development, but it would almost never be the PO (which I assume means product owner). The PO usually signs off, but is not usually the best choice for handling deployment.

Who gets the call at 3am when the software isn't working or a system died? If it's the dev team then by all means expect to own the deployment (since you own production).
Best practice for organizations that can support it is to provide the Operations group with deployment instructions and good wishes. Bottles of scotch help too.
If your production controls are lax, then tighten them up. A book like "Visible Ops" is a great guide to getting things under appropriate levels of control in the appropriate hands.

I'm a dev mgr with responsibility over multiple products. I have my dev teams produce builds of deployment artifacts, such as .war files, that can simply be deployed to a Tomcat web server using its manager interface or web service API. The configuration for the app is all set and self-contained within the .war file. Hence it is straightforward for the person doing deployment to just take it and "drop it in", so to speak.
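In case it helps make the "drop it in" step concrete, here is a minimal sketch of pushing a .war to Tomcat's manager text API from Python. The host, credentials, file paths and context path are placeholders, and the Tomcat user is assumed to have the manager-script role:

```python
# Minimal sketch: push a self-contained .war to Tomcat's manager "text" API.
# Host, credentials and paths below are placeholders.
import requests

TOMCAT = "http://tomcat.example.local:8080"   # hypothetical Tomcat host
WAR_FILE = "build/myapp.war"                  # artifact produced by the dev team
CONTEXT = "/myapp"

with open(WAR_FILE, "rb") as war:
    resp = requests.put(
        f"{TOMCAT}/manager/text/deploy",
        params={"path": CONTEXT, "update": "true"},
        data=war,                              # the .war bytes are the request body
        auth=("deployer", "secret"),           # user with the manager-script role
        timeout=300,
    )

resp.raise_for_status()
print(resp.text)   # Tomcat replies with an OK/FAIL status line for the context
```

Because the configuration is baked into the .war, the only thing the person doing deployment varies is the target host.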
If we don't get this level of ease of deployment to where deployment can be completely decoupled from the development team, then I view that as a failure on the dev team's part to adequately do their job.
The person doing deployment proceeds to release a given product to numerous customer sites - that's not a productive use of my developers' time; they have products to design and create, as that is their area of expertise.
In our organization the deployment responsibility also overlaps with the first tier production support responsibility.
We practice some scrum methodology but I've never viewed this issue as tied to software development process methodology, per se.

The software process is not complete until the working software is in the hands of the users who need it - otherwise it's just "shelfware".
If there is no one else to be responsible for deployment and configuration management, then you're it ;-)

I would think that as an "in-house" developer it would be your responsibility (unless there is a specialised deployment team) to deploy the new software, whereas if you were from an external company it would be up to the client to deploy it themselves, unless specified in the contract.

Depends on the project and what "deployment" means for you. Since I am a web developer, deploying mostly .NET applications with a SQL Server database, I always prefer the deployment to be done by a release manager or deployment manager. Why? Because separation of duties ensures that problems are caught when they need to be.
The developer's job should be to provide the required objects or instructions on how to deploy; then someone else deploys to a staging environment. If something goes wrong during deployment to staging, the deployment instructions are corrected until the deployment to staging works flawlessly. That way, there will hopefully be no mistakes when the same deployment script is used to move the code to production. In other words, not only must you test your code but also the deployment script.
Of course, in the real world this doesn't always happen because of personnel issues, but this would be my ideal.
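As an illustration of "test the deployment script too", here is a rough sketch of a single parameterized deploy routine that is exercised against staging and then reused unchanged for production. The server names, share paths and commands are placeholders, not a prescription:

```python
# Sketch of the "one script, two targets" idea: the same steps run against staging
# first, and that exact script is reused for production. All names are illustrative.
import argparse
import subprocess

TARGETS = {
    "staging":    {"site_path": r"\\web-stg01\d$\sites\myapp", "db_server": "sql-stg01"},
    "production": {"site_path": r"\\web-prd01\d$\sites\myapp", "db_server": "sql-prd01"},
}

def deploy(target: str, package: str) -> None:
    cfg = TARGETS[target]
    # 1. back up the current site, 2. copy the new package in, 3. run DB migrations.
    # robocopy uses exit codes >= 8 for failure, so we don't treat non-zero as fatal here.
    subprocess.run(["robocopy", cfg["site_path"], cfg["site_path"] + "_backup", "/MIR"], check=False)
    subprocess.run(["robocopy", package, cfg["site_path"], "/MIR"], check=False)
    subprocess.run(["sqlcmd", "-S", cfg["db_server"], "-i", "migrations.sql"], check=True)

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("target", choices=TARGETS)
    parser.add_argument("package")
    args = parser.parse_args()
    deploy(args.target, args.package)
```

Because staging and production only differ by the entries in the target table, a clean staging run is a genuine rehearsal of the production deployment.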

Seems fairly simple to me: if not you, then who? Would actual deployment responsibility have fallen to some other team before you started using Scrum? If not, then I don't see why Scrum would change that.

If the dev team was doing the deployment before Scrum, they should continue to do so, unless management has specifically said that someone else should do it. If management hasn't said, then they haven't really thought about it and just expected that it would happen magically, like it always has.
If you don't like that, bring it up with management, but do the work until told differently.

I think you have to man up and deploy the software, unless you are working in an organization that has some kind of serious data security and/or SOX issues with allowing the unclean to deal with the production end of things.
I agree with the first comment - Scrum has nothing to do with it. In fact, I would think it's far better for you to be deploying, as you'll know first-hand how well things are working and be right there to get feedback from those users.

Related

Team based development environment setup for shared CMS development - best practice?

We're planning to select DNN+2sxc for a project for our team.
Normally when it comes to a CMS, I usually fly solo, but in a corporate .Net or Java environment it’s team collaboration, source control, Azure, deployments etc.
With our upcoming project we’re taking one of our main sites (C#/asp.net/razor) and converting it to DNN.
However, I'm currently unsure how to approach a CMS in a team development environment.
So in the development phase we'll have some guys doing styling, others creating 2sxc reusable content templates and others building the actual pages - all at the same time, on the same website. In terms of Git/Visual Studio, I'm not sure how it will actually work, especially in relation to the DB. This question obviously applies to all CMSes (not just DNN) in a shared development environment.
What is the best practice to do this?
So I prefer to do most development locally, in my own instance (local IIS and local DB), with each individual project (module,theme/skin) in a separate repository. This makes the risk of me breaking someone else, or someone else causing me pain, minimal.
You can use a tool like PolyDeploy to automate the deployment from repository check-ins into that upper environment, requiring that individuals check code into the repository when they are ready to deploy to a test/UAT/prod type environment.
Where it gets tricky is content for sure, I would typically do that in a test/uat environment that will ultimately be pushed to production once it is finalized.
I NEVER source control the DNN instance itself; that's just asking for pain.
This can be quite challenging, especially since some parts are user-data (which shouldn't be re-deployed on development) and other parts are dev.
There is a minimal guide to this here: https://docs.2sxc.org/abyss/enterprise-development/index.html

How do you track your current deployments?

Imagine there is an application consisting of a bunch of microservices. All of these microservices can be developed and deployed completely independently of each other. Each microservice can be "described" with several attributes - e.g. current API version, release version, commit hash etc. Along with that, there are several environments used in the development process - e.g. a Testing environment (often called Sandbox), a Staging environment, a Pre-Release environment and obviously the Production environment.
Is there a convenient tool/way/approach to track, basically, which attribute is currently deployed to which environment? For instance, to get quick access to information like "what is the current version of the RESTful API in the Pre-Release environment"? Or a more complex one - "what was this version two months ago"? And of course to see the "global picture" as well?
There's no ready-to-use solution on the market yet, as far as I know.
Some teams are using GitOps https://www.twistlock.com/2018/08/06/gitops-101-gitops-use/ to get ahead of the chaos that a lot of different microservices usually bring with them.
Another technology in a somewhat different, yet related direction is the service mesh, Istio https://istio.io/ being one of them.
There are also test approaches like contract testing or heavy integration tests, which are more expensive but also provide more confidence.
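For what it's worth, a home-grown report can get you surprisingly far. The sketch below assumes each service exposes a /version endpoint returning its API version, release and commit (that endpoint, the hosts and the service names are my assumptions, not something the question guarantees) and simply polls every environment:

```python
# Minimal sketch of a "what is deployed where" report, assuming each service
# exposes a /version endpoint returning JSON like
# {"api_version": "...", "release": "...", "commit": "..."}.
# Environment URLs and service names are placeholders.
import json
import urllib.request

ENVIRONMENTS = {
    "sandbox":     "https://sandbox.example.com",
    "staging":     "https://staging.example.com",
    "pre-release": "https://prerelease.example.com",
    "production":  "https://www.example.com",
}
SERVICES = ["orders", "billing", "inventory"]   # illustrative service names

for env, base_url in ENVIRONMENTS.items():
    for service in SERVICES:
        url = f"{base_url}/{service}/version"
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                info = json.load(resp)
            print(f"{env:12} {service:10} api={info['api_version']} "
                  f"release={info['release']} commit={info['commit'][:8]}")
        except Exception as exc:               # unreachable env, missing endpoint, ...
            print(f"{env:12} {service:10} unavailable ({exc})")
```

Persisting each run's output (even just appending it to a log or a small database) is what gives you the "what was deployed here two months ago" view.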

How do Jenkins, GitHub and Puppet interact

First I should disclose I only manage vendor relationships and lack deep technical knowledge.
I just had a conference call with one of our sub-contractors, who has asked me to sign off on a PO for 4 different servers (one for Jenkins, another for GitHub, a third for Puppet and a fourth as a test box).
The technical architect seems quite defensive when I ask him questions. I know it's not my job to question his ability, but I do have a budget to manage and I am concerned they have over-engineered this (or at least the 3 products they have mentioned seem to overlap a lot).
Would someone be so kind as to clearly explain to me the role each one plays?
I would also appreciate a second opinion as to whether they really need 4 servers and whether some of the technologies mentioned could co-exist on the same server (i.e. could Jenkins, GitHub and Puppet all live on a single Ubuntu server?).
The 4 tools do not overlap. They interact and complement one another.
Git is the source control tool. You store the entire history of the code there. It's the dev team's black box.
Jenkins is the continuous integration tool. It will use Git to get the latest version of the code (or the testing version, or the pre-production version) and test it against the test patterns you defined.
Puppet is a configuration management tool, used to automate server administration and keep servers in a known state.
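To make that division of labour concrete, here is a rough sketch (in Python, purely for illustration) of what a Jenkins job effectively does on every check-in. In reality this lives in Jenkins job configuration rather than a hand-rolled script, and the repository URL, build/test commands and host name are placeholders:

```python
# Rough sketch of one CI cycle: pull from Git, build and test (Jenkins' job),
# then let Puppet-managed servers pick up the result. Repo URL, build/test
# scripts and the host name are illustrative placeholders.
import subprocess

REPO = "git@github.example.com:acme/webapp.git"   # hypothetical GitHub repository

def ci_cycle() -> None:
    subprocess.run(["git", "clone", "--depth", "1", REPO, "workspace"], check=True)  # Git: fetch the latest source
    subprocess.run(["./build.sh"], cwd="workspace", check=True)                      # Jenkins: compile/package
    subprocess.run(["./run_tests.sh"], cwd="workspace", check=True)                  # Jenkins: run the test suite
    # Puppet keeps the target servers configured; here we just trigger an agent
    # run on one managed node ("puppet agent --test" returns 2 when it applies changes).
    subprocess.run(["ssh", "app01", "sudo", "puppet", "agent", "--test"], check=False)

if __name__ == "__main__":
    ci_cycle()
```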
Honestly, it depends on your project. If it's a huge app that requires heavy building cycles, then jenkins will be better off on its own server, so that people can still work normally with other services.
I believe Jenkins and Puppet could be on the same server, or so could Puppet and Git (Git is very light on resources).
The test box sounds OK, but I think the Git box could also serve as a test box.
I think you could cut the number of servers in half. But again, it depends on the size of your project. If it's a big project, play it fair and let them do it. Even if the server split is a little strange, it could be necessary.
But frankly, I don't think you really need that.

Deployment in an agile environment

In the past my development team has mostly done waterfall development against an existing application, and deployments were only really done towards the end of a release. That would normally mean TEST, UAT and PROD releases, usually only three to five releases in a two-month cycle.
A release was an MSI installer, deployed via Group Policy.
We have now moved to a more agile methodology and require releases at least once per day for testing, sometimes more often.
The application is a VB6 app and the MSI was taking care of COM registrations for us; users do not have elevated privileges on their machines.
Does anyone have any better solutions for rapid deployment?
We have considered batch/scripted installs of the MSI, or doing COM registrations per file, both using CPAU for elevated privileges, and ClickOnce. Neither of these have been tested yet.
Edit: Thanks for suggestions.
To clarify, my pain point is that the MSI build/deployment process takes a long time - it can take up to two hours to get a new build onto the testers' desktops. The testers do not have admin rights on their machines (and will not get them), so I am looking for a better solution.
I have played around with ClickOnce, using a .NET wrapper which starts up the application and has all the OCX/DLL VB6 components as isolated dependencies, but this has issues finding all the assemblies when it starts up, or messages to that effect.
CruiseControl and NAnt are probably your best bet for builds with flexible output. But rapid deployment?
My concern is that you are looking at the daily builds in the wrong way. The dailies do NOT need to be widely deployed. In fact, QA and Development are the only ones who should care about the builds on a day-to-day basis. Even then, the devs shouldn't be out of sync ;).
The customer team should only receive builds at the end of an iteration. That is where you show them what you have done, they provide feedback, and you move forward from there. Giving them daily builds could cause a vicious thrashing that would kill your velocity.
All that being said, a nice deployment package might be good for QA. But again, it depends on how in step they are with your development iterations. My experience, right or wrong, is that QA is one iteration back testing the deliverables from the last iteration. From that point of view, they should be testing with the last "stable" release as well.
Is this something you can do in a virtual machine? You could securely give your testers admin rights on the virtualized system and most virtualization software has some form of versioning so you can roll back to a "good" state if something goes wrong. I've found it to be very useful for testing.
I'd recommend ClickOnce with the option to update on execution. That way only people using the software receive and install the updates.
You could try registry-free COM. See this other question. ActiveX EXEs still have to be registered though.
EDIT: to clarify, using registry-free COM means the OCX/DLL components you mention don't need to be registered. Nor do any OCX/DLL components they use. You can just copy the whole application directory onto a tester's machine and it will work straightaway.
If I understand your question correctly, you need admin rights to install your product. I see three options:
1) Don't install to the tester's desktops. Get some scratch testing machines (as dmo suggested, VMWare might help) that you can safely give them admin rights to. This may mean giving them a test domain and their own group policy to edit.
2) Build a variant that doesn't require MSI installation, and can be executed directly. Obviously your testers would not be testing the deployment and installation process with this variant, but they could perform other tests of the product's functionality. I don't know if this is possible with your product; it would certainly be work.
3) Take your agile medicine: "[prefer] responding to change over following a plan". That is, if denying admin rights to your testers is interfering with their ability to do their jobs efficiently, then challenge the organization to give them admin rights. (from experience, this will mean shifting to #1, but it might be the best way to make the case). If they are expected to test the product, how can they not even be allowed to install it in the same way a customer would?
If the MSI deployment is taking velocity out of agile testing, then you should test MSI deployment less regularly.
Use XCOPY deployment wherever possible, using .local for COM components. This can be a problem with third-party components. As third-party components are pretty stable, you should be able to build a custom MSI for these, install them once and be done with it.
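A rough sketch of what that XCOPY-plus-.local deployment could look like is below; the app name, drop folder and target path are placeholders, and out-of-process ActiveX EXEs still need normal registration:

```python
# Sketch of an XCOPY-style deploy using ".local" (DotLocal) redirection: copy the
# build output and create an empty MyApp.exe.local file so Windows prefers the
# DLL/OCX files sitting next to the EXE over registered ones.
# App name and paths are placeholders.
import pathlib
import shutil

BUILD_DIR = pathlib.Path(r"\\buildserver\drops\myapp\latest")   # hypothetical drop folder
TARGET_DIR = pathlib.Path(r"C:\Apps\MyApp")
EXE_NAME = "MyApp.exe"

shutil.copytree(BUILD_DIR, TARGET_DIR, dirs_exist_ok=True)      # the "XCOPY" part (Python 3.8+)
(TARGET_DIR / f"{EXE_NAME}.local").touch()                      # empty marker file enables redirection
print(f"Deployed to {TARGET_DIR}; COM DLLs will be loaded from the app folder.")
```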
You should try an automated build/deploy process or a script that you can run manually. Try TeamCity or CruiseControl. Good luck!
I'm not sure just precisely what your pain point is.
You specifically mention registration of VB6 COM objects. Does the installer sometimes fail because of that?
Is it that the installer works but people don't know to install the new build so they are more often than not reporting bugs on an old build?
If the former, then I suspect the problem is that VB6 was very likely to play fruit-basket turnover with the GUIDs when rebuilding the solution. Try recreating your public interfaces in MIDL and have your VB6 classes implement those interfaces.
If the latter, then try Microsoft's SMS product. No, it has nothing to do with cell phones. If all the users aren't on the same domain, then you will have to build an "auto update" feature into your product. Here is a third party offering that I've heard of but never used.
I'm using SetupBuilder (http://setupbuilder.com/products_setupbuilder_pro.htm) for all my builds. Very extensible. Excellent support.
Not sure exactly whether it fits your needs, but this kind of post on their forums, "Installing as a limited account user (non-admin) on XP" (http://www.lindersoft.com/forums/showthread.php?t=11891&highlight=admin+rights), makes me think it might.

Is automatic upgrades a realistic feature to expect from enterprise Web applications?

Most of the work I do is with what could be considered enterprise Web applications. These projects have large budgets, longer timelines (3 to 12 months), and heavy customization. Because we as developers have been touting the idea of the Web as the next desktop OS, customers are coming to expect the software running on this "new OS" to behave the same as on the desktop. That includes easy-to-manage automatic upgrades. In other words, "An update is available. Do you want to upgrade?" Is this even a realistic expectation? Can anyone speak from experience on trying to implement this feature?
At my company we have enterprise installations ranging into the thousands of seats. If we implemented an auto-upgrade, our customers would mutiny!
Large installations have peculiar issues that don't apply to small ones. For example, with 2000 users (not all of whom are, let us say, the most sophisticated of tool users), tool training is a big deal: training time, internal demos, internal process documents, etc. They cannot unleash a new feature or UI change without a chance to understand how it fits into their process, and therefore what their internal best practices are and how to communicate that to their users.
Also, when applications fail, it's the internal IT team who are responsible. Therefore, they want time to install a new version in a test area, beat it up, and deploy on a Saturday only when they're good and ready.
I can see the value in making minor patches more easy to install, particularly when the patch is just for a bug-fix and not for anything that would require retraining, and if the admins still get final say over when it's installed. But even then, I don't believe anyone has ever asked for this! Whether because they don't want it or they are trained to not expect it, it doesn't seem worth it.
Well, it really depends on your business model, but for a lot of applications the SaaS model can end up biting you. It's great for a lot of things, but for some larger applications the users are not investing a significant amount up front and could possibly move to something else before you've made any money.
See http://news.zdnet.com/2424-9595_22-218408.html and http://www.25hoursaday.com/weblog/2008/07/21/SoftwareAsAServiceWhenYourBusinessModelBecomesAParadox.aspx for more information.
One of the primary reasons to implement an application as a web application is that you get automatic upgrades for free. Why would users be getting prompted for upgrades on a web app?
For Windows applications, the "update is available, do you want to upgrade?" functionality is provided by Microsoft using ClickOnce, which I have used successfully in an enterprise environment - there are a few gotchas, but for the most part it is a good way to manage automatic deployment and upgrade of Windows apps.
For mobile apps, you can also implement auto-upgrades, although it is a little trickier.
In any case, to answer your question in a broad sense, I don't know if it is expected that all enterprise apps should make upgrading easy, but it certainly is worth the money from an IT support standpoint to architect them to allow for easy upgrading.
If you're providing a hosted solution, I wouldn't bother. Let the upgrade happen silently (perhaps with a notice that you did it). If you're selling an application that's hosted on their servers, let the upgrade decision be made by a single owner, not every user of the app.