I'm in the process of implementing source control more correctly via Mercurial at work and I've run into a situation. My environment is two programmers with a Server and approximately 4 dev computers: our 2 Office desktops, where the majority of the code writing happens, and 2 laptops used in the Labs for testing and debugging.
Previously, we had just been operating over the network; the code projects lived on the server and both my office desktop and the lab laptop opened the files over the network. Yeah, I know it wasn't the best of ideas, but we made it work. Moving to a more correct model of DVCS with local repos presents me with a problem: how do I get my code updates from my Office, where I was typing, to the Lab so I can program an actual chip? I feel like this level of changes (10, 20, 50, maybe even 100 little changes over the course of a day of development) doesn't need to go through the Server. My personal opinion is that commits to the Server should be reserved for when I'm actually ready to share what I have with others... not necessarily finished with the project, just ready to share where I'm at.
Do I have to push to the Server and then pull to the Laptop every time?
Can I just push/pull back and forth between my Office and the Lab laptop repos? How would I set that connection up?
Under the assumption that the "Server" is a CVCS emulation in a DVCS environment (i.e. the exclusive push target and pull source for all data exchanges) and the "always work in a single branch" antipattern is not used:
Each dev host works with at least two named branches: a personal branch (for WIP) and the shared merge target "default". WIP has to be pushed to the Server, and every other host syncs its local repository with the whole Server repository (but the "authoritative source" is only the default branch).
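For concreteness, here's a minimal shell sketch of that flow; the path alias `server` and the branch name `wip-alice` are placeholders I'm assuming, not anything prescribed by Mercurial or by this setup:

```
# Assumed setup: a path alias "server" in .hg/hgrc pointing at the central repo.
hg branch wip-alice                    # personal WIP branch on the office desktop
hg commit -m "WIP: tweak ADC driver"
hg push --new-branch server            # everything still flows through the Server

# On the lab laptop:
hg pull server                         # brings down all branches, incl. wip-alice
hg update wip-alice                    # continue exactly where the desktop left off

# When ready to share, merge into the shared "default" branch:
hg update default
hg merge wip-alice
hg commit -m "Merge wip-alice into default"
hg push server
```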
Pure DVCS-model
Except "Server" as default path, each Dev-host have 3 additional entries for other Dev's workplaces and pull-only model used for simplicity (no additional ACL and rules for pushes). I.e. (with human's communication) local http-server (hg serve) activated on source(s) on demand and on target developer hg pull ANOTHERDEV. Source server can' be stopped after it. Personal named branches isn't bad idea in this case also
Note: `hg serve` can always be left enabled on all 4 dev hosts, and a combined pull command (pull from the 3 other repos) can be defined as an alias on every host and used when needed, without additional negotiation.
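As a rough sketch of that pull-only setup (the host names, port, and alias name below are assumptions, not part of the answer):

```
# On the machine that has the changes (e.g. the office desktop):
hg serve -p 8000                       # lightweight read-only HTTP server for this repo

# On the lab laptop, record the peers and a combined-pull alias once:
cat >> .hg/hgrc <<'EOF'
[paths]
office  = http://office-desktop:8000/
office2 = http://office-desktop-2:8000/
lab2    = http://lab-laptop-2:8000/

[alias]
pullall = !hg pull office; hg pull office2; hg pull lab2
EOF

# From then on:
hg pull office                         # pull directly from the office desktop
hg pullall                             # or pull from every peer that is up
```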
Related
I'm working on an experiment with an ML technique that requires me to use a better machine for computational purposes, so they gave me an SSH connection to that machine. The data are also stored on that server.
My workflow is this:
(I'm working on a headless server)
Connect from my local machine via SSH and run the script for the experiments...
On that machine I can only use vim, without any of my setup.
If I want to change something I have to change it locally and then push the changes.
I pull the changes on the remote server and then I try a new experiment.
Occasionally I have to push the results (plots and more) from the remote server, then pull them locally to work on them, and eventually push again.
I think there is a flaw in this, and that there's a better way to manage all of these things.
Do you have any ideas?
What I need is just a clever way to avoid pushing every change I make.
Another alternative is to use an IDE like VSCode with the Remote - SSH extension, following this tutorial.
That way, VSCode on your local machine displays and edits files directly on the remote machine, without you having to pull/push them.
Depending on that extension, you might still need a separate SSH session in order to git add/commit those modified files.
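For example, a host entry in ~/.ssh/config keeps both the extension and those manual sessions short; the host name, user, and key file below are placeholders, not values from the question:

```
# Append a host alias; the Remote - SSH extension lists entries from this file.
cat >> ~/.ssh/config <<'EOF'
Host gpu-box
    HostName gpu-box.example.org
    User me
    IdentityFile ~/.ssh/id_ed25519
EOF

# The same alias also shortens the separate session you may still need
# for git add/commit on the remote side:
ssh gpu-box
```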
Vagrant uses the words "share" and "sync" seemingly interchangeably. Is there a difference? If so, what is the difference?
IMO, "sync" implies that the data is duplicated in two places, and Vagrant does some magic to ensure that changes to one are also made to the other. This is a slightly different semantics to "sharing". Which is Vagrant doing, or can it do both?
EDIT: for example, say I want a VM running MySQL server, but storing the database files on the host. Is this kind of setup the kind of thing that shared/synced directories are appropriate for? E.g., do I have a guarantee of atomicity/transactionality? Sharing semantics would guarantee this, but syncing semantics possibly wouldn't.
(To make things worse, there's also Vagrant Share, which is unrelated to syncing or sharing.)
Shared folder (v1 terminology) vs. synced folder (renamed in v2)
In short: Shared Folders are VirtualBox specific (vboxsf) and have known performance issues as the number of files grows.
Vagrant v2 (Vagrant 1.1.x, 1.2.x+) docs use a more generic name, Synced Folder, which now includes many options: the default vboxsf, rsync, samba/CIFS, and NFS.
By default, Vagrant syncs the project directory (where the Vagrantfile resides) with /vagrant in the guest. This can be turned off by explicitly disabling it in the Vagrantfile and doing a vagrant reload.
e.g. `config.vm.synced_folder ".", "/vagrant", disabled: true`
For the long story, see this answer: https://stackoverflow.com/a/18529697/1801697
Let's talk about sync
For vboxsf and nfs, host and guest folders (I mean synced folders) are always in sync (changes made on either side are synced to the other).
NOTE: SMB/CIFS should be the same but I've never used it.
In Vagrant 1.5 the rsync type was added, which makes manual syncing possible; by default it syncs from host to guest upon the first vagrant up. I personally prefer rsync when real-time sync between host and guest is NOT needed.
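For example, assuming the Vagrantfile declares an rsync-type folder, e.g. `config.vm.synced_folder ".", "/vagrant", type: "rsync"`, the day-to-day commands look like this:

```
vagrant up            # the initial host -> guest sync happens here
vagrant rsync         # push host changes to the guest whenever you choose
vagrant rsync-auto    # or watch the host folder and re-sync on every change
```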
BTW: Vagrant Share is something different; it shares SSH access or other services via a cloud gateway.
I've deployed a website made with Umbraco in Windows Azure, using the Windows Azure Accelerator for Umbraco.
For development and testing I used a test hostname. Now it's time to switch to the official DNS hostname.
How can I change the current hostname?
I configured the hostname at deployment time (the only way I know to do this), but I can't deploy again, since many files have been changed while working on the website on Azure.
EDIT
Let me explain: at the prompt shown in the image (during website deployment) I used "test.mywebsite.com" as the Domain Name, and configured the real DNS.
Now the website is configured, so I'd like to make mywebsite.com point to that site.
But it isn't enough if I just configure the mywebsite.com DNS! Shall I deploy again? And will I lose any of the changes I made?
I'd like to make two comments on your question:
1) In order to host your Azure application under a custom host name, you will need to sign up with a DNS provider that supports CNAME records (most do). I suggest someone like GoDaddy.com because by default CNAME records can only resolve your "www.domainname.com" queries and cannot do anything for queries where "www." is dropped from the URL. DNS providers like GoDaddy also have an option to redirect all traffic destined for "domainname.com" to a URL of your choice. This is a huge deal for Azure apps. Frankly speaking, it is somewhat disappointing that for all the PaaS and IaaS features of Azure, DNS was not included in the overall package.
2) I am a little worried when you say that you can no longer redeploy your app due to the changes made. Can you elaborate on that? Have you made changes to the application's code running on VMs in Azure without going through the redeployment process? If so, this is a huge no-no. Your VMs running in Azure are not "permanent". Microsoft and your redeployment process can (and will) re-stage those VMs to the original package at any given time. Microsoft will re-image your VMs at least once a month during their monthly OS upgrades. But they can also do so when they need to move your VM to another rack, etc. Whatever changes you make to your app must be either stored in source control before deployment or in a permanent storage facility like SQL Azure, Azure Storage, etc.
HTH
Finally, I think that the answers to my questions are:
- Shall I deploy again? Yes, I must deploy again.
- Will I lose any of the changes I made? Many changes will be kept, since they are stored in the DB. But I have to do a lot of work to make the new website work!
This answer confirms my theory:
In my case, I created and uploaded a site with a name, let's say http://www.contoso.com, and then bought a domain from a registrar, let's say http://www.example.com. When I mapped http://MyAcceleratorsService.cloudapp.net/ to my new domain (http://www.example.com) and tried to open that domain, I got the home page of the Accelerator and not the uploaded site.
I had to upload the site again to Azure (using UploadUmbracoSite.cmd from the Accelerator application) and, when uploading, enter the same domain name as the one I registered: http://www.example.com. Then I was able to browse my uploaded site as expected.
As for your question, you will upload the site again using UploadUmbracoSite.cmd (it's in the Setup folder) and enter the new domain name when requested.
Exactly what I was trying to avoid... but the only solution, I suppose.
Well, it was not easy to publish again; I got errors of many types (I suppose tied to some components that I installed after the deploy and that are not installed in the newly deployed website). I'm working on solving them.
Edit
Completed my work:
- loads of different attempts, none of which worked
- CTP backup of the DB
- deleted the DB and website
- new full deploy of Umbraco
- CTP restore of the DB
Finally:
- all work on content is OK
- all work on styles, pages, and templates is lost
Changing the hostname is hard; don't use a test hostname, use the definitive hostname from the beginning.
If anyone has a suggestion, I'll be pleased to test it, anyway.
This is not really an answer to your question, but it might be a solution to your problem: Use a CNAME record to make the production DNS name point to your development name. E.g. www.productionname.com will then point to www.testname.com. I am not sure if everything will just work out of the box, but it seems to be worth a try.
This requires that your hosting provider allows you to set up CNAME records.
http://en.wikipedia.org/wiki/CNAME_record
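If you try this, you can verify the record from any machine before relying on it; www.productionname.com and www.testname.com are just the placeholder names from above:

```
# Ask the resolver what the production name points at; expect the test name back.
dig +short www.productionname.com CNAME
# www.testname.com.
```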
We're currently using RightScale, and every time we deploy, we execute a script on the server or server array that we want to update. It pulls the code from a GitHub repository, creates a new folder in /var/www/releases/TIMESTAMP, and symlinks the document root, /var/www/current, to that directory.
We're looking to get a better deployment strategy, such as something where we SSH into one of the servers on the private network, and run a command-line script to deploy what we want to deploy.
However, this means that this one server has to have its public key in the authorized_keys of all of the servers we want to deploy to. Is this safe? Wouldn't this be a single server from which all the other servers could be accessed?
What's the best way to approach this?
Thanks!
We use a similar strategy to deploy, though we're not with Rightscale anymore.
I think generally that approach is fine and I'd be interested to learn what you think is not serious about it.
If you want to do your SSH thing, then I'd go about it the following way:
Lock down SSH using security groups, e.g. open SSH only to a specific IP or to servers with a deploy security group, or similar. The disadvantage here is that you might lock yourself out when the other servers are down, etc.
I'd put public keys on each instance to allow password-less login. If you're security conscious, rotate those keys on a monthly basis or, for example, when employees leave (see the key-setup sketch after this list).
Use Fabric or Capistrano to log into your servers (from the deploy master) using SSH and do your deployment.
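A small sketch of that key setup; the deploy user and the host names are placeholders, not anything from your environment:

```
# On the deploy master: generate a dedicated key pair and distribute the public half.
ssh-keygen -t ed25519 -f ~/.ssh/deploy_key -C "deploy-master"
for host in web1 web2 web3; do
    ssh-copy-id -i ~/.ssh/deploy_key.pub deploy@"$host"
done
# Rotating later just means repeating this with a fresh key and removing the old
# line from each server's ~/.ssh/authorized_keys.
```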
Again, I think RightScale's approach is not unique to them; a lot of services do it like that. The reason is that when you symlink and keep the previous versions around, it's easier to roll back, and so on.
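The core of that pattern is tiny; here's a sketch using the paths from your question and a placeholder repo URL:

```
# Check out a fresh timestamped release and repoint the docroot symlink.
RELEASE=/var/www/releases/$(date +%Y%m%d%H%M%S)
git clone --depth 1 https://github.com/example/app.git "$RELEASE"
ln -sfn "$RELEASE" /var/www/current    # the docroot now serves the new release

# Rolling back is just repointing the symlink at an older release directory:
# ln -sfn /var/www/releases/<previous-timestamp> /var/www/current
```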
We use Windows Azure Cloud services to host our application. One of the great features of Windows Azure is the Production/Staging model. You can have the clients of your application routed to your production server, while you can test your new code running on a staging server. For example, you can configure Azure to point a production server to http://www.coolapp.com while designating a staging server for the same app to something like this: http://7f8e9d5ba73a4f7ea9ebd65a02ee195d.cloudapp.net.
Physically both of these servers are publicly facing. If you were to know the cryptic URL of a staging server you would be able to browse to the app just as easily as you would browse to www.coolapp.com. However, the presence of a GUID in the URL makes it virtually impossible for someone to guess it, thus making the staging server "private". This gives a nice mechanism to the developers of an application to deploy and test the new bits on a staging server before releasing them to public. Once they make sure that things look good, with a flip of a switch they swap the two servers, making staging server a production server and vice versa.
This model creates a small problem for us in relation to Facebook integration. To be able to integrate Facebook plugins you have to register your app with them. FB will then issue AppId and AppSecret keys. These keys are tied to the URL of your application. So in order for my app to work with FB plugins I need to obtain one set of keys that is tied to 7f8e9d5ba73a4f7ea9ebd65a02ee195d.cloudapp.net, and another set that is tied to www.coolapp.com.
When I read about Windows Azure, they really urge developers to treat staging vs. production servers as the same. The only difference between them should be the URL. In other words, Azure does not recommend basing your app logic on which server the code happens to be running on as Azure has no inherent knowledge of this. Staging vs. production is just a handy "abstraction" if you will. I guess you see the problem here. In our example above, I have to use one set of keys issued by FB versus another depending on which URL (production vs. staging) my app is running at. I assume I am not the first one running into this problem. What are the correct ways of handling this? One obvious way is to sniff the URL property of the Request object and branch my logic that way. However, intuition tells me this is a hack. Any other ideas?
Regards,
Archil
The mechanisms I know of are:
using "production" within a totally separate service account to "testing" - this leaves "staging" within the production service to be used as an area for "deployment candidates" and provides a separate clean testing domain with a non-changing URL for earlier "dev and test" work.
using different .cscfg files for staging and production - and being careful to update this .cscfg before you do any live switching.
sniffing the incoming URL - as you suggest
Personally, I use the first of these techniques - it's easy and it helps prevent nasty accidents.
As an aside, one of the techniques we've used for "removing" the GUID from staging is to CNAME the GUID with a really short TTL on the DNS - this allows us to quickly and automatically update the CNAME record for the staging server when we deploy.