How can I securely store a Google key file on deploy? - google-cloud-storage

I'm using the Adonis GCS Drive and it uses the following environment variables:
GCS_KEY_FILENAME
GCS_BUCKET
So I went to the Google Cloud Console and generated a service account key. In GCS_KEY_FILENAME I set the path to the GCS key file. Now I'm not sure how to deploy this.
I thought about creating the file dynamically by setting env variables with the file contents, but I'm not sure if that's the best solution.
Note: I intend to deploy to Heroku.
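For what it's worth, here is a rough sketch of that dynamic-file idea on Heroku. The GCS_KEY_JSON variable name, the /app/gcs-key.json path, and npm start are only placeholders, not anything the Adonis driver defines: put the raw JSON of the key into a config var, write it to a file when the dyno boots, and point GCS_KEY_FILENAME at that file.
heroku config:set GCS_KEY_JSON="$(cat path/to/key.json)" GCS_KEY_FILENAME=/app/gcs-key.json
Then in the Procfile, write the file before starting the app:
web: bash -c 'echo "$GCS_KEY_JSON" > /app/gcs-key.json && npm start'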

Related

Why is the gcloud sdk's deploy command looking at my home directory for files?

I'm attempting to deploy a python server to Google App Engine.
I'm trying to use the gcloud sdk to do so.
It appears the command I need to use is gcloud app deploy.
I get the following error:
me@mymachine:~/development/some-app/backend$ gcloud app deploy
ERROR: (gcloud.app.deploy) Error Response: [3] The directory [~/.config/google-chrome/Default/Cache] has too many files (greater than 1000).
I had to add ~/.config to my .gcloudignore to get past this error.
Why was it looking there at all?
The full repo of my project is public but I believe I've included the relevant portion.
I looked at your linked repo and there aren't any yaml files. As far as I know, a GAE project needs an app.yaml file because that file tells GAE what your runtime is so that GAE knows how to deploy/run your code. In fact, according to the gcloud app deploy documentation, if you don't specify any yaml files to be deployed, it will default to app.yaml in the current directory. If it can't find any in the current directory, it will try to build one.
Your repo also shows you have a Dockerfile. The GAE documentation for custom runtimes says "Custom runtimes let you build apps that run in an environment defined by a Dockerfile". In the app.yaml file for custom runtimes, you will have the following entry:
runtime: custom
env: flex
Since you don't have an app.yaml file and you do have a Dockerfile in which you are downloading and installing Chrome, it seems to me that gcloud app deploy is trying to infer your runtime, and this has led to it executing some or all of the contents of the Dockerfile before it attempts to push it to production. That is what makes it take a peek at the config directory on your local machine until you explicitly tell it to ignore it. To be clear, I'm not 100% sure of this; I'm just trying to see if I can draw a logical conclusion.
My suggestion would be to create an app.yaml file and specify a custom runtime, or just use the Python runtime with flex.
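If you go with the plain Python runtime on flex instead, a minimal app.yaml might look something like this (the entrypoint is only an illustration; adjust it to however your server actually starts):
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT main:app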

How to set APNS Auth path on Heroku vapor app

How do you reference a file path when a Vapor Swift application is deployed on Heroku? This works locally, but not when I deploy to Heroku. On my local machine I set the file path in an environment variable like this: APNS_AUTH_KEY_PATH: $(SRCROOT)/apikeys/AuthKey_Y8HP6L5K6P.p8, and it works fine. I added the same key path as a Heroku config variable, but it says it can't find the file and the application crashes on Heroku.
$(SRCROOT) is an Xcode concept that doesn't translate to Linux. So you either have to copy the key over to Heroku and reference the fully qualified path, or just inject the contents of the key as an environment variable itself. The second option is far better, as you're not committing the key to source control.
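As a rough sketch of the second option (the APNS_AUTH_KEY variable name is just an example), you could load the key contents into a config var straight from the file:
heroku config:set APNS_AUTH_KEY="$(cat apikeys/AuthKey_Y8HP6L5K6P.p8)"
Then read that variable in the app (e.g. via Environment.get("APNS_AUTH_KEY") in Vapor) and pass the contents to your APNS configuration instead of a file path.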

Packaging SF service into a single file

I am working through how to automate the build and deploy of my Service Fabric app. Currently I'm working on the package step, and while it creates files within the pkg subfolder, it always creates a folder hierarchy of files, not a true package in a single file. I would swear I've seen a .SFPKG file (or something similarly named) that has everything in one file (a zip maybe?). Is there some way to create such a file with msbuild?
Here's the command line I'm using currently:
msbuild myservice.sfproj "/p:Configuration=Dev;Platform=AnyCPU" /t:Package /consoleloggerparameters:verbosity=minimal /maxcpucount
I'm concerned about not having a single file because it seems inefficient in sending a new package up to my clusters, and it's harder for me to manage a bunch of files on a build automation server.
I believe you read about the .sfpkg at
https://azure.microsoft.com/documentation/articles/service-fabric-get-started-with-a-local-cluster
Note that internally we do not yet support provisioning a .sfpkg file. This is a feature that will be coming soon (date TBD). Instead, we upload each file in the application package.
Update (SF 6.1 - April 2018)
Since 6.1 it is possible to create a ZIP file (*.sfpkg) and upload it to an external store. Service Fabric executes a GET operation to download the sfpkg application package. For more info, see https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-package-apps#create-an-sfpkg
NOTE: This only works with external provisioning; the Azure image store still doesn't support sfpkg files.
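As a rough illustration, once the msbuild /t:Package step has produced the package folder (pkg\Dev with the command line above), the .sfpkg for external provisioning is just that folder zipped up with the extension renamed, e.g. in PowerShell (treat the paths as placeholders for your own layout):
Compress-Archive -Path .\pkg\Dev\* -DestinationPath .\myservice.zip
Rename-Item .\myservice.zip myservice.sfpkg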

Google Compute Startup Script PHP Files From Bucket

I'd like to automatically load a folder full of PHP files from a bucket when an instance starts up. My PHP files are normally located at /var/www/html.
How do I write a startup script for this?
I think this would be enormously useful for people such as myself who are trying to deploy with autoscaling but don't want to have to create a new image with their PHP files every time they deploy changes. It would also be useful as a way of keeping a live backup in Cloud Storage.
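A minimal startup script for this could simply sync the bucket into the web root with gsutil, which is available on the standard Compute Engine images (the bucket name below is a placeholder):
#! /bin/bash
gsutil -m rsync -r gs://my-php-bucket/html /var/www/html
You would then attach it to the instance or instance template, for example:
gcloud compute instances create my-instance --metadata-from-file startup-script=startup.sh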

What would be the best way to use jammit and publish files on Amazon S3?

I'm using jammit to package the JS and CSS files for a Rails project.
I would now like to upload the files to Amazon S3 and use CloudFront for delivery.
What would be the best way to deal with new versions?
My ideal solution would be a Capistrano recipe to deal with it.
Has anyone already done something like that?
You could simply create a Capistrano task that triggers the copy to S3 after deploying.
You might use s3cmd as the command-line tool for that.
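For example, the upload step itself could be as simple as the following (the bucket name is a placeholder, and public/assets assumes jammit's default output path); you would wrap this call in a Capistrano task hooked to run after deploy:
s3cmd sync --acl-public public/assets/ s3://my-assets-bucket/assets/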
Alternatively, you could create a folder mounted by FuseOverAmazon and configure it as the package_path in your jammit assets.yml. Make sure to run the rake task for generating the asset packages manually or in your deploy recipe.
http://s3tools.org/s3cmd
http://code.google.com/p/s3fs/wiki/FuseOverAmazon