Issue with deploying a React website to Firebase - firebase-hosting

I'm trying to host my very first website on Firebase.
I'm following this tutorial, but there seems to be something wrong.
I'm supposed to pick -
( ) Hosting: Configure and deploy Firebase Hosting sites
But that option is missing from the list.
Instead I got -
( ) Realtime Database: Configure a security rules file for Realtime Database and (optionally) provision default instance
( ) Firestore: Configure security rules and indexes files for Firestore
( ) Functions: Configure a Cloud Functions directory and its files
( ) Hosting: Configure files for Firebase Hosting and (optionally) set up GitHub Action deploys
( ) Hosting: Set up GitHub Action deploys
( ) Storage: Configure a security rules file for Cloud Storage
( ) Emulators: Set up local emulators for Firebase products
So what do I do?
Which one should I choose?
Thank you

You're looking for this option:
( ) Hosting: Configure files for Firebase Hosting and (optionally) set up GitHub Action deploys
Deploying to Hosting has been expanded to (optionally) also cover GitHub Actions, but you can still do the basic scenario through it too.

Related

Will firebase init hosting reset settings on the Firebase console after deploy --only hosting?

Is there a way to reset Firebase Hosting settings, including domains added to the hosting, from the Firebase CLI? I am having an issue with one domain that for some reason is not redirecting or behaving the way it should, and I want to do a hard reset of those settings, including domain ownership verification, so I can add my DNS keys again...

Connect Amplify to GitHub Enterprise

We are trying to create an AWS Amplify app. For CI/CD we want to integrate it with GitHub. I understand Amplify has a way to add a GitHub account (personal, with username and password), but I am not able to find a way to add a GitHub Enterprise account (which doesn't have such username-and-password credentials).
Is there a way to add GitHub Enterprise to Amplify, similar to how CodeBuild allows connecting?
A GHE (GitHub Enterprise) server does support GitHub Actions.
So check first whether activating an action like amplify-cli-action or (depending on what you want to do) amplify-preview-actions would help in your case.
In terms of credentials, those actions would need the following setup (a scripted sketch follows these steps):
Navigate to the AWS Identity and Access Management console.
Under Users -> Add New User, fill in the user name (GithubCI) and set Programmatic access for Access type.
In permissions, select Create a new group; in the dropdown, select Create policy.
In the policy creation menu, select the JSON tab, fill it with the policy statement required by the action, then hit review and save.
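Purely as an illustration, the same steps could be scripted with boto3; the user name GithubCI comes from the steps above, while the policy ARN is a hypothetical placeholder (its statement would come from the chosen action's documentation):
import boto3

iam = boto3.client("iam")

# Create the programmatic-access user from the steps above
iam.create_user(UserName="GithubCI")

# Create an access key pair; store these as secrets on the GHE side
key = iam.create_access_key(UserName="GithubCI")["AccessKey"]
print("AWS_ACCESS_KEY_ID:", key["AccessKeyId"])
print("AWS_SECRET_ACCESS_KEY:", key["SecretAccessKey"])

# Attach the policy created from the action's documented statement
# (the ARN below is a made-up placeholder)
iam.attach_user_policy(
    UserName="GithubCI",
    PolicyArn="arn:aws:iam::123456789012:policy/GithubCIPolicy",
)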

Edit SQL file to secure credentials during deployment of project in Azure DevOps

I am using an open source tool to deploy the schema for my Snowflake warehouse. I have successfully done it for tables, views, and procedures. Currently I'm facing an issue: I have to deploy Snowflake stages the same way, but stages require a URL and an Azure SAS token when you define them in your SQL file, like this:
CREATE or replace STAGE myStage
URL = 'azure://xxxxxxxxx.blob.core.windows.net/'
CREDENTIALS = ( AZURE_SAS_TOKEN = 'xxxxxxxxxxxxxxxxxxxx' )
file_format = myFileFormat;
It is not encouraged to keep credentials in a file that will be published to version control and accessed by others. Is there a way/task in Azure DevOps so I can keep a template SQL file in the repo, change it before compilation and execution (maybe via Azure Key Vault), and change it back to the template afterwards, so the credentials and token always remain secure?
Have you considered using a STORAGE INTEGRATION instead? If you create a storage integration and grant it access to your Blob storage, you'd be able to create STAGE objects without passing any credentials at all.
https://docs.snowflake.net/manuals/sql-reference/sql/create-storage-integration.html
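A minimal sketch of what that could look like from the Python connector; the integration name, tenant ID, and connection parameters are all stand-ins:
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...",
)
cur = conn.cursor()

# One-time admin setup: a trust policy between Snowflake and Azure
cur.execute("""
    CREATE STORAGE INTEGRATION MY_AZURE_INT
      TYPE = EXTERNAL_STAGE
      STORAGE_PROVIDER = 'AZURE'
      ENABLED = TRUE
      AZURE_TENANT_ID = '<tenant-id>'
      STORAGE_ALLOWED_LOCATIONS = ('azure://xxxxxxxxx.blob.core.windows.net/')
""")

# The stage references the integration, so no SAS token appears in the SQL
cur.execute("""
    CREATE OR REPLACE STAGE myStage
      URL = 'azure://xxxxxxxxx.blob.core.windows.net/'
      STORAGE_INTEGRATION = MY_AZURE_INT
      FILE_FORMAT = myFileFormat
""")
After creating the integration, DESC STORAGE INTEGRATION shows the consent URL used to authorize Snowflake on the Azure side.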
For this issue, you can use credential-less stages to secure your cloud storage without sharing secrets.
I agree with Mike here: storage integrations, a new object type, allow a Snowflake administrator to create a trust policy between Snowflake and the cloud provider. When Snowflake connects to the organization's cloud storage, the cloud provider authenticates and authorizes access through this trust policy.
Storage integrations and credential-less external stages put the power of connecting to storage in a secure and manageable way into the administrator's hands. This functionality is now generally available in Snowflake.
For details, please refer to this document. In addition, you can go via Azure Key Vault, which provides a secure place for storing and accessing secrets; a sketch of that approach follows.
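For instance, a deployment step could render the template at release time by pulling the SAS token from Key Vault, so the repo only ever holds a placeholder. This is only a sketch; the vault name, secret name, and file paths are assumptions:
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

client = SecretClient(
    vault_url="https://my-vault.vault.azure.net",
    credential=DefaultAzureCredential(),  # picks up the pipeline's identity
)
sas_token = client.get_secret("snowflake-stage-sas-token").value

# Replace the placeholder in the template and write the deployable file
with open("stages/myStage.sql.template") as f:
    sql = f.read().replace("{{AZURE_SAS_TOKEN}}", sas_token)
with open("stages/myStage.sql", "w") as f:
    f.write(sql)  # feed this file to the deployment tool, then discard it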

Trying to add a new bucket in Firebase Storage. Cannot find GCP Mumbai (asia-south1). Any clue?

This link shows that GCP Mumbai has launched. However, when I go to my Firebase billing account and try to add a new bucket from the Firebase console, I cannot find the Mumbai (asia-south1) region listed.
How can I debug this?
Create a bucket in Mumbai (asia-south1) in the Google Cloud console (or with the client library, as sketched after these steps).
Go to Firebase console -> Storage -> Add bucket.
Select "Import existing Google Cloud Storage buckets".
The buckets will be listed.
Select the bucket created in Mumbai in the Google Cloud console.
Select continue to import.
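If you prefer to script the first step, here is a minimal sketch with the Cloud Storage client library (the bucket name is a placeholder):
from google.cloud import storage

client = storage.Client()  # uses the project's default credentials
bucket = client.create_bucket("my-project-mumbai-assets", location="asia-south1")
print(bucket.name, bucket.location)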
I believe your current Firebase project already has Firebase Storage enabled in a different region. In this case, you will have to create a new Firebase project and set its location to Mumbai so you can create your buckets within this location; changing the region of an existing project is not supported in Firebase yet.

Detect Google Cloud Project Id from a container in Google hosted Kubernetes cluster

When connecting to Bigtable, I need to provide the Google Cloud project ID. Is there a way to detect this automatically from within K8s?
In Python, you can find the project id this way:
import google.auth
# default() returns a (credentials, project_id) tuple for the current environment
_, PROJECT_ID = google.auth.default()
The original question didn't mention what programming language was being used, and I had the same question for Python.
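Since the question is ultimately about Bigtable: assuming the same default-credentials environment, the Bigtable client itself can usually infer the project, so you may not need to pass it explicitly at all. A small sketch:
from google.cloud import bigtable

client = bigtable.Client()  # project inferred from the default credentials
print(client.project)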
You can use the metadata service. Example:
curl -H "Metadata-Flavor: Google" -w '\n' http://metadata.google.internal/computeMetadata/v1/project/numeric-project-id
This will work from any VM running on Google Compute Engine or Container Engine.
See https://cloud.google.com/compute/docs/storing-retrieving-metadata:
Google Compute Engine defines a set of default metadata entries that provide information about your instance or project. Default metadata is always defined and set by the server.
...
numeric-project-id The numeric project ID of the instance, which is not the same as the project name visible in the Google Cloud Platform Console. This value is different from the project-id metadata entry value.
project-id The project ID.
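If you'd rather do the same lookup from code, a Python equivalent of the curl call above (querying project-id rather than numeric-project-id) might look like this:
import requests

resp = requests.get(
    "http://metadata.google.internal/computeMetadata/v1/project/project-id",
    headers={"Metadata-Flavor": "Google"},  # required, or the server refuses
)
print(resp.text)  # the human-readable project ID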
Google has some libraries for this too: ServiceOptions.getDefaultProjectId
https://googleapis.github.io/google-cloud-java/google-cloud-clients/index.html
https://github.com/googleapis/google-cloud-java/blob/master/google-cloud-clients/google-cloud-core/src/main/java/com/google/cloud/ServiceOptions.java
https://github.com/googleapis/google-cloud-java/tree/master/google-cloud-clients/google-cloud-core