How can I move an app from one space to another? - swisscomdev

I have created a new space in the organisation and I have to move several apps from one space to the newly created one. How can I do this? (I don't want to rename the old space, since only a subset of its apps needs to be moved.)

I'm afraid there is no built-in function to move apps from one space to another. There are two ways to achieve it, though:
Compile & Push
If you have access to the source-code of the app and use a common CI/CD setup, simply adjust your scripts, have them point to the new space and trigger a complete CI/CD cycle.
Download & Push
If you don't have access to the source code or the binary repository where you store your builds, or you have to ensure it is 100% the same droplet in the new space, you can download that droplet from Cloud Foundry and push it to the new space:
cf app source-app --guid to get your source app guid
cf curl /v2/apps/:source-guid/droplet/download --output /tmp/droplet.tgz to download the source app's droplet to your local machine (note that if your Cloud Controller is configured to use a remote blobstore this command will return a redirect to the actual location of the droplet)
cf target -s destination-space to target the space you want to push the copied droplet to
cf push --droplet /tmp/droplet.tgz destination-app-name
Credits: Github
This will work for buildpack-based apps but not for Docker-based apps, as you can't simply download a Docker image this way.
Pitfall: If you want to use the same route again, don't forget to delete (not just unbind) it from your old space.
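As a minimal sketch of that cleanup (the space, domain and hostname below are placeholders for your own values):

# target the old space and delete the route so it can be mapped in the new space
cf target -s old-space
cf delete-route example.com --hostname my-app -f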

Related

How to update files whenever script is scheduled to run in Heroku app

I have a simple python script that is hosted on Heroku and I'm using the Heroku Scheduler to run the script every hour/day. The script will possibly update a simple .txt file (could also be a config var if possible) when it runs. When it does run and conditions are met, I need that value stored and used when the next scheduled script runs. The value changed is simply a date.
However, since the app is containerized based on the most recent code I have on Github, it doesn't store those changes anywhere to be used again. Is there any way I can accomplish to update the file and use it every time it runs? Any simple add-ons or other solutions I can use?
Heroku dynos have an ephemeral local file system that does not survive an application restart or redeployment, so it cannot be used to persist data.
Typically you have 2 options:
use a database. On Heroku you can use Postgres (there is also a free tier); see the sketch after this list
save the file on external storage (S3, Dropbox, even GitHub). See Files on Heroku for details and examples
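As a rough sketch of the database option (the table and key names are made up, and it assumes the Heroku Postgres add-on is attached so DATABASE_URL is set and the psql client is available to the job):

# create the state table once
psql "$DATABASE_URL" -c "CREATE TABLE IF NOT EXISTS script_state (key text PRIMARY KEY, value text);"

# store the date produced by the current run
psql "$DATABASE_URL" -c "INSERT INTO script_state (key, value) VALUES ('last_run_date', '2023-01-31') ON CONFLICT (key) DO UPDATE SET value = EXCLUDED.value;"

# read it back at the start of the next scheduled run
psql "$DATABASE_URL" -t -A -c "SELECT value FROM script_state WHERE key = 'last_run_date';"

Your Python script can do the same through any Postgres driver instead of shelling out.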

Github still asking for credentials despite successful creation of deploy key?

I have created a deploy key according to the Windows instructions here.
But instead of using the deploy key that has just been set up, git push asks for credentials: first with a pop-up, then with an SSH pop-up, then in the Git Bash command line itself! This is quite shocking, because the whole purpose of a deploy key is to avoid having to provide access to an entire GitHub account.
Given I have followed github's own instructions precisely and this isn't working, I am lost as to what to do next.
Notes
Some time ago, somehow, I set up a deploy key successfully on the same (Windows) server. So perhaps having more than one key on the machine is confusing some part of the process. I am not sure this has anything to do with it, though.
I can see here that GitHub expects keys to be named id_rsa and id_rsa.pub, but given this is my second deploy key on this particular server, I named the second set differently so as to avoid overwriting the original set (the original set is still there; there are just two more files in C:\Users\[YOUR-USER-NAME]\.ssh\).
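If the non-default file name is the cause, one common way to point ssh at the right key is a host entry in ~/.ssh/config (the key file name below is a placeholder for whatever the second key pair was actually called); run this in Git Bash:

# append a host entry so ssh offers the deploy key when talking to github.com
cat >> ~/.ssh/config << 'END'
Host github.com
    HostName github.com
    IdentityFile ~/.ssh/id_rsa_deploy2
    IdentitiesOnly yes
END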

Simple workflow for spinning up an Azure VM from a snapshot in ARM?

I need to move from Azure Classic Portal to ARM for working with VMs by November and am trying to get a jump start on learning the new process.
Here is what I do in the Classic Portal now...
Make a Windows Server VM:
Add some software, make some changes, shut it down and click the 'Capture' button in Classic. Provide a name and label, and I now have a snapshot I can make new VM copies from. Easy!
Make a new VM from snapshot:
Click New, Virtual Machine, From Gallery, My Images, Select Image, Create. So easy!
That's it. That's all I do, and all I need to do.
I make 10-30 VMs at a time that way and it's really quick and easy.
How can I do that same workflow in ARM?
I have tried JSON templates, cmdlets, and the UI in ARM, and cannot for the life of me figure out how to emulate the Classic workflow/functionality in ARM.
Any suggestions?
Thanks in advance!
If I understand correctly, you want to create multiple VMs from one VM image in ARM.
After preparing your VM correctly, here is the workflow to copy VMs from an existing VM:
Go to Virtual machines > select your VM > Capture > provide an image name and image label (select the box if you have run Sysprep on your VM) > OK
Go to Images > select the image you created > Create VM > provide the necessary basic information > provide size information > Settings > Summary > OK (in this step you can also create a template and use it to create more VMs easily)
Note: for how to prepare and capture a Windows VM to a generalized image, refer to this official document.
But if you want to create multiple VMs from a Classic image in ARM (usually a Classic image can only be used to create Classic VMs), it takes quite a bit more work:
Create a new ARM Storage Account
Copy the Specialized VHD from your Source Storage Account to the new ARM Storage Account
Create your new VM and point the Source VHD (Specialized disk) to the copied VHD
For more details about how to move a Classic image to ARM and use it to create VMs, refer to this link.
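If you prefer scripting over clicking through the portal, the same capture-and-create flow can also be sketched with the Azure CLI (the resource group, VM, image and credential values below are placeholders, and it assumes the source VM has already been sysprepped):

# deallocate and mark the sysprepped source VM as generalized
az vm deallocate --resource-group my-rg --name source-vm
az vm generalize --resource-group my-rg --name source-vm

# capture it as a managed image
az image create --resource-group my-rg --name my-image --source source-vm

# create as many VMs from that image as you need (repeat or loop for 10-30 copies)
az vm create --resource-group my-rg --name new-vm-01 --image my-image \
    --admin-username azureuser --admin-password '<your-password>'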

Docker and sensitive information used at run-time

We are dockerizing an application (written in Node.js) that will need to access some sensitive data at run-time (API tokens for different services) and I can't find any recommended approach to deal with that.
Some information:
The sensitive information is not in our codebase, but it's kept on another repository in encrypted format.
On our current deployment, without Docker, we update the codebase with git, and then we manually copy the sensitive information via SSH.
The Docker images will be stored in a private, self-hosted registry.
I can think of some different approaches, but all of them have some drawbacks:
Include the sensitive information in the Docker images at build time. This is certainly the easiest one; however, it makes them available to anyone with access to the image (I don't know if we should trust the registry that much).
Like 1, but having the credentials in a data-only image.
Create a volume in the image that links to a directory in the host system, and manually copy the credentials over SSH like we're doing right now. This is very convenient too, but then we can't spin up new servers easily (maybe we could use something like etcd to synchronize them?)
Pass the information as environment variables. However, we have 5 different pairs of API credentials right now, which makes this a bit inconvenient. Most importantly, however, we would need to keep another copy of the sensitive information in the configuration scripts (the commands that will be executed to run Docker images), and this can easily create problems (e.g. credentials accidentally included in git, etc).
PS: I've done some research but couldn't find anything similar to my problem. Other questions (like this one) were about sensitive information needed at build-time; in our case, we need the information at run-time
I've used your options 3 and 4 to solve this in the past. To rephrase/elaborate:
Create a volume in the image that links to a directory in the host system, and manually copy the credentials over SSH like we're doing right now.
I use config management (Chef or Ansible) to set up the credentials on the host. If the app takes a config file needing API tokens or database credentials, I use config management to create that file from a template. Chef can read the credentials from encrypted data bag or attributes, set up the files on the host, then start the container with a volume just like you describe.
Note that in the container you may need a wrapper to run the app. The wrapper copies the config file from whatever the volume is mounted to wherever the application expects it, then starts the app.
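A rough sketch of such a wrapper, used as the container's entrypoint (the mount point, config path and start command are assumptions for a Node.js app like yours):

#!/bin/sh
# copy credentials from the mounted volume to where the app expects them
cp /run/secrets-volume/config.json /app/config/config.json
# then hand over to the actual application
exec node /app/server.js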
Pass the information as environment variables. However, we have 5 different pairs of API credentials right now, which makes this a bit inconvenient. Most importantly, however, we would need to keep another copy of the sensitive information in the configuration scripts (the commands that will be executed to run Docker images), and this can easily create problems (e.g. credentials accidentally included in git, etc).
Yes, it's cumbersome to pass a bunch of env variables using -e key=value syntax, but this is how I prefer to do it. Remember the variables are still exposed to anyone with access to the Docker daemon. If your docker run command is composed programmatically it's easier.
If not, use the --env-file flag as discussed here in the Docker docs. You create a file with key=value pairs, then run a container using that file.
$ cat >> myenv << END
FOO=BAR
BAR=BAZ
END
$ docker run --env-file myenv your-image
That myenv file can be created using chef/config management as described above.
If you're hosting on AWS you can leverage KMS here. Keep either the env file or the config file (that is passed to the container in a volume) encrypted via KMS. In the container, use a wrapper script to call out to KMS, decrypt the file, move it in to place and start the app. This way the config data is not exposed on disk.
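A hypothetical version of that KMS wrapper (the file paths and the availability of the AWS CLI inside the container are assumptions):

#!/bin/sh
# decrypt the KMS-encrypted env file that was mounted into the container
aws kms decrypt --ciphertext-blob fileb:///run/secrets/env.encrypted \
    --output text --query Plaintext | base64 --decode > /app/.env
# load the variables into the environment and start the app
set -a; . /app/.env; set +a
exec node /app/server.js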

Sensible deployment using EC2

We're currently using RightScale, and every time we deploy, we execute a script on the server or server array that we want to update. It pulls the code from a GitHub repository, creates a new folder in /var/www/releases/TIMESTAMP, and symlinks the document root, /var/www/current, to that directory.
We're looking to get a better deployment strategy, such as something where we SSH into one of the servers on the private network, and run a command-line script to deploy what we want to deploy.
However, this means that this one server has to have its public key in the authorized_keys file of all of the servers we want to deploy to. Is this safe? Wouldn't it make that one server a single point from which all of the other servers can be accessed?
What's the best way to approach this?
Thanks!
We use a similar strategy to deploy, though we're not with Rightscale anymore.
I think that approach is generally fine, and I'd be interested to learn what you think is not sensible about it.
If you want to do your ssh thing, then I'd go about it the following:
Lock down ssh using security groups, e.g. open ssh only to a specific IP, or to servers in a deploy security group, or similar. The disadvantage here is that you might lock yourself out when those servers are down, etc.
I'd put public keys on each instance to allow password-less login. If you're security conscious, rotate those keys on a monthly basis or, for example, when employees leave.
Use Fabric or Capistrano to log into your servers (from the deploy master) over ssh and do your deployment.
Again, I think RightScale's approach is not unique to them; a lot of services do it like that. The reason is that when you symlink and keep the previous version around, it's easier to roll back, and so on.
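As a sketch of that release/symlink pattern, run on each target server during a deploy (the paths and repository URL are placeholders):

# check out the new release into a timestamped directory
RELEASE=/var/www/releases/$(date +%Y%m%d%H%M%S)
git clone --depth 1 git@github.com:your-org/your-app.git "$RELEASE"

# switch the document root to the new release; older releases stay around for rollback
ln -sfn "$RELEASE" /var/www/current

Rolling back is then just pointing /var/www/current at the previous release directory.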