Deploy Firebase Cloud Functions with a default user

I have created some cloud functions, some of which rely on a custom claim with admin as the role. I want a default user (set in an environment variable) to have the admin role so it can be used to assign the role to others if needed, but it should only be created during deployment. Is it possible for the cloud functions to create this user when the functions are deployed?

Is it possible for the cloud functions to create this user when the functions are deployed?
No. Any accounts required by the deployment must exist before it runs. Whatever command you're running (gcloud or the Firebase CLI) can't double up the work in a single invocation. You would need to run at least two commands to get everything set up, or write your own program or script that does all the work in one shot.
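If you go the script route, a minimal sketch might look like the following (assuming the Firebase CLI and the firebase-admin Node package are installed, GOOGLE_APPLICATION_CREDENTIALS points at a service account key, and ADMIN_EMAIL/ADMIN_PASSWORD are the environment variables you mentioned; all names here are illustrative, not a prescribed layout):

#!/usr/bin/env bash
# Sketch: deploy the functions, then ensure the default admin user exists.
set -euo pipefail

firebase deploy --only functions

# Create (or look up) the user and set the admin custom claim via the Admin SDK.
node -e '
const admin = require("firebase-admin");
admin.initializeApp();
(async () => {
  const email = process.env.ADMIN_EMAIL;
  let user;
  try {
    user = await admin.auth().getUserByEmail(email);
  } catch (e) {
    user = await admin.auth().createUser({ email, password: process.env.ADMIN_PASSWORD });
  }
  await admin.auth().setCustomUserClaims(user.uid, { role: "admin" });
})();
'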

Related

Rundeck querying AWS WAF [Community edition]

I am new to creating jobs in Rundeck (Community edition). I'd like to create a job under a project that accepts two parameters from the user (1. external/internal, 2. an IP CIDR) and returns whether that IP CIDR already exists in WAF.
The current process is that the user passes these parameters and the script runs an aws-vault command so the user can authenticate with the AWS account.
I have a shell script that does this, but I'm wondering how to do it with Rundeck jobs. Also, is there a way to allow the entire Rundeck instance (IAM roles?) to authenticate against a certain AWS account?
Thanks in advance.
To execute a script on Rundeck, you have two options:
Create a new project, create a new job, give it a name, select the "Script" step on the Workflow tab (you can pass the parameters in the "Arguments" textbox), put the script content there, then save and run the job.
Create a new project, create a new job, give it a name, select the "Script file or URL" step on the Workflow tab (again, parameters go in the "Arguments" textbox), put the script file path there, then save and run the job.
I have a shell script to do so but wondering how to do this using Rundeck jobs. Also, is there a way to allow the entire Rundeck instance (IAM roles?) to authenticate against a certain AWS account?
For EC2 remote instances, S3 actions, and some plugins specific (and exclusive) to Process Automation, it's possible: the credentials are part of the plugin configuration.
For AWS WAF you can create a script that calls the awscli tool with the right parameters and execute it from a job (or design your own AWS WAF plugin).
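As a rough sketch of what that script step could look like (assuming a WAFv2 IP set whose name and ID you've looked up beforehand, e.g. with "aws wafv2 list-ip-sets"; IP_SET_NAME and IP_SET_ID are placeholders, and the two job options are assumed to be named scope and cidr):

#!/usr/bin/env bash
# Rundeck exposes job options as RD_OPTION_<NAME> environment variables.
set -euo pipefail

SCOPE="$RD_OPTION_SCOPE"   # e.g. REGIONAL (internal) or CLOUDFRONT (external)
CIDR="$RD_OPTION_CIDR"

# List the addresses currently in the IP set, one per line.
ADDRESSES=$(aws wafv2 get-ip-set \
  --name "$IP_SET_NAME" --id "$IP_SET_ID" --scope "$SCOPE" \
  --query 'IPSet.Addresses' --output text | tr '\t' '\n')

if grep -qFx "$CIDR" <<< "$ADDRESSES"; then
  echo "$CIDR already exists in WAF"
else
  echo "$CIDR not found in WAF"
fi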
Anyway, take a look at the basic tutorial to learn how Rundeck works.

Injected DB credentials change when I deploy a new app version to the cloud

I deploy a web app to a local Cloud Foundry environment. As the database service for my DEV environment I chose the marketplace service google-cloudsql-postgres with the plan postgres-db-f1-micro. Using the web UI I created an instance named myapp-test-database and referenced it in the CF manifest:
applications:
- name: myapp-test
  services:
  - myapp-test-database
At first, all is fine. I can even redeploy the existing artifact. However, when I build a new version of my app and push it to CF, the injected credentials are updated and the app can no longer access the tables:
PSQLException: ERROR: permission denied for table
The tables are still there, but they're owned by the previous user. They were automatically created by the ORM in the public schema.
While the -OLD application still exists, I can retrieve the old username/password from the CF web UI or $VCAP_SERVICES and drop the tables.
Is this all because of Rolling App Deployments? If so, I'd expect a lot of complaints.
If you are strictly doing a cf push (or restart/restage), the broker isn't involved (the Cloud Controller doesn't talk to it) and service credentials won't change.
The only action through cf commands that can modify your credentials is doing an unbind followed by a bind. Many, but not all, service brokers will throw away credentials on unbind and provide new, unique credentials for a bind. This is often desirable so that you can rotate credentials if credentials are compromised.
Where this can be a problem is if you have custom scripts or cf cli plugins to implement rolling deployments. Most tools like this will use two separate application instances, which means you'll have two separate bindings and two separate sets of credentials.
If you must have one set of credentials you can use a service key to work around this. Service keys are like bindings but are not associated with an application in Cloud Foundry.
The downside of a service key is that, unlike a binding, it is not automatically exposed to your application through $VCAP_SERVICES. To work around this, you can pass the service key credentials into a user-provided service and bind that to your application, or you can pass them to your application through other environment variables, like DB_URL.
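A hedged sketch of the user-provided-service approach (the service and key names are illustrative, and the JSON payload would be copied from the service-key output rather than typed from scratch):

cf create-service-key myapp-test-database myapp-key
cf service-key myapp-test-database myapp-key        # prints the credentials
cf create-user-provided-service myapp-test-db-cups -p '{"username":"...","password":"...","uri":"..."}'
cf bind-service myapp-test myapp-test-db-cups
cf restage myapp-test                               # pick up the new $VCAP_SERVICES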
The other option is to switch away from using scripts and cf cli plugins for blue/green deployment and to use the support that is now built into Cloud Foundry. With cf cli version 7+, cf push has a --strategy option which can be set to rolling to perform a rolling deployment. This does not create multiple application instances and so there would only ever exist one service binding and one set of credentials.
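For example, a single push along those lines (app name as used above):

cf push myapp-test --strategy rolling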
Request a static username using the extra bind parameter "username":
cf bind-service my-app-test-CANDIDATE myapp-test-database -c "{\"username\":\"myuser\"}"
With cf7+ it's possible to add parameters to the manifest:
applications:
- name: myapp-test
  services:
  - name: myapp-test-database
    parameters: { "username": "myuser" }
https://docs.cloudfoundry.org/devguide/services/application-binding.html#arbitrary-params-binding
Note: Arbitrary parameters are not supported in app manifests in cf CLI v6.x. Arbitrary parameters are supported in app manifests in cf CLI v7.0 and later.
However, I can't find the new syntax here: https://docs.cloudfoundry.org/devguide/deploy-apps/manifest-attributes.html#services-block . The syntax I use comes from some other SO question.

Creating a user that's not a cloudsqlsuperuser in Cloud SQL using Terraform

I'd like to limit the privileges afforded to any given user that I create via the Google Terraform provider. By default, any user created is placed in the cloudsqlsuperuser group, and any new database created has that role/group as owner. This gives any user created via the GCP console or google_sql_user Terraform resource total control over any database that is (or was) created in a similar fashion.
So far, the best we've been able to come up with is creating and altering a user via a single-run k8s job. This seems circuitous at best, especially given that the resource must then be manually imported later if we want to manage it via Terraform.
Is there a better way to create a user that has privileges limited to a single, application-specific database?
I was puzzled by this behaviour too. It's probably not the answer you want, but if you can use GCP IAM accounts, the user gets created in the PostgreSQL instance with NO roles.
There are three types of account you can create with "gcloud sql users create" or the Terraform resource "google_sql_user": CLOUD_IAM_USER, CLOUD_IAM_SERVICE_ACCOUNT, or BUILT_IN. The default is BUILT_IN if no type is specified. CLOUD_IAM_USER and CLOUD_IAM_SERVICE_ACCOUNT accounts are created with NO roles.
We are using these, as integration with IAM is useful in lots of ways (not managing passwords at the database level is a major plus, especially in conjunction with the SQL Auth Proxy).
BUILT_IN accounts (i.e. old-school accounts with a postgres username and password) are, for some reason, granted the "cloudsqlsuperuser" role. Given that the real superuser role isn't allowed on GCP, this is about as privileged as an account can get, so to me (and to you) it seems a bizarre default.
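As a minimal sketch, creating such an IAM-based user looks something like the following (the instance name and email are placeholders; this also assumes IAM database authentication is enabled on the instance via the cloudsql.iam_authentication flag):

gcloud sql users create alice@example.com \
  --instance=myapp-test-instance \
  --type=CLOUD_IAM_USER

The Terraform equivalent is setting type = "CLOUD_IAM_USER" on the google_sql_user resource.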

Using AWS KMS and/or credstash with a non-AWS server

Is it possible to use AWS KMS and a tool like credstash without the use of EC2 or equivalent, or does it rely solely on IAM roles?
I've got a server elsewhere where I am testing some things out, and ultimately I will look at migrating an app to EC2 etc. to make use of scaling. But for now, whilst I'm setting up my deployment pipeline, I wondered if it was still possible to make use of KMS on my non-AWS-provisioned server.
The only possible way I can think of is installing the AWS CLI tools on the server in question. Does this sound like the right approach?
What @Viccari said in the comments is correct. In terms of what you want to do (store passwords), the AWS Parameter Store would be a good fit for you. See https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-paramstore.html for more information. The guide explicitly calls out your use case:
Parameter Store offers the following benefits and features.
Use a secure, scalable, hosted secrets management service (No servers to manage).
In the end, whether you use Parameter Store or KMS, you will need credentials stored somewhere to obtain an AWS STS token for calling the underlying AWS services. When working outside of AWS EC2, you will need the AWS access key and secret key from an IAM user. When in EC2, the IAM instance role magically provides the credentials and is used to call those AWS services; the AWS SDK does this for you behind the scenes.
But, as you state, you don't want to run this in EC2 (to save money, or for other reasons). The quickest way to store these credentials is in an untracked file (added to your .gitignore) that you can source as environment variables, which your program will then read. This allows you to test locally and still run in EC2 with zero code changes. See https://docs.aws.amazon.com/cli/latest/userguide/cli-environment.html for which variables to set. Note that this doc talks about the CLI; the SDKs follow the same behavior.
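As a minimal sketch of that setup (the file name and parameter name are illustrative, and the key values are deliberately left as placeholders):

cat > .env.aws <<'EOF'
export AWS_ACCESS_KEY_ID=your-access-key-id
export AWS_SECRET_ACCESS_KEY=your-secret-key
export AWS_DEFAULT_REGION=us-east-1
EOF
echo '.env.aws' >> .gitignore   # keep the credentials out of SCM

source .env.aws
# e.g. fetch a secret from Parameter Store with those credentials:
aws ssm get-parameter --name /myapp/db-password --with-decryption \
  --query 'Parameter.Value' --output text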

How to use sensitive passwords needed to run scripts within RunDeck?

I have a case where the Rundeck scripts need some credentials in order to run. Obviously we do not want to store these in the job definitions, because those are visible and also stored in SCM.
While I was able to put these secrets into the Key Storage vault, I was not able to find a way to access them from the job itself.
Rundeck 2.6.2 (released 2015-12-02) allows you to specify Key Storage secrets as default values for secure job options. See "Secure Options using Key Storage" in the documentation.
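As a sketch in the YAML job-definition format (the option name and key path are illustrative), a secure option that pulls its default from Key Storage looks like:

options:
- name: dbpassword
  secure: true
  valueExposed: true
  storagePath: keys/myapp/dbpassword

With valueExposed set, the script step can then read the secret as $RD_OPTION_DBPASSWORD (or reference it as @option.dbpassword@) without it ever appearing in the job definition or in SCM.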