I have my custom domain in Firebase as well as the pre-generated domains:
myproject-cb169.web.app
myproject-cb169.firebaseapp.com
www.myproject.ca
Now via CLI I want to deploy my website but only to my custom domain (www.myproject.ca). How do I edit the rules/targets for this?
My current/default firebase.json hosting settings:
"hosting": {
"public": "public",
"rewrites": [{
"source": "**",
"destination": "/index.html"
}],
All of the URLs you list point to the same hosting site serving exactly the same content. There is no way to make a change available only on your custom domain.
If you want people to use only your custom domain, be sure to promote that one and not either of the default generated ones.
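For completeness: a CLI deploy always updates the single hosting site behind all of those domains at once, e.g.:

```shell
# Deploys the hosting site; web.app, firebaseapp.com and the custom
# domain all serve the new content, since they are the same site.
firebase deploy --only hosting
```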
I was searching the web for information on my question: how to add secrets and access policies to an existing Azure Key Vault, shared by other applications, using ARM.
I read this documentation.
What I'm worried about is whether anything existing will be overwritten or deleted, as I'm creating a new template and parameter file in my service's "solution", so to speak.
And I know that I have my CI/CD pipelines in DevOps set to "incremental" with regard to what they should be updating and creating.
Anyone have a crystal clear understanding regarding this?
Thanks in advance!
UPDATE:
So I think I managed to get it right here after all.
I created a new key vault resource and added a couple of secrets and some access policies to emulate the situation of an already created resource to which I want to add new secrets.
Then I created this template:
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "keyVault": {
      "type": "string"
    },
    "Credentials1": {
      "type": "secureString"
    },
    "SecretName1": {
      "type": "string"
    },
    "Credentials2": {
      "type": "secureString"
    },
    "SecretName2": {
      "type": "string"
    }
  },
  "variables": {},
  "resources": [
    {
      "type": "Microsoft.KeyVault/vaults/secrets",
      "name": "[concat(parameters('keyVault'), '/', parameters('SecretName1'))]",
      "apiVersion": "2015-06-01",
      "properties": {
        "contentType": "text/plain",
        "value": "[parameters('Credentials1')]"
      }
    },
    {
      "type": "Microsoft.KeyVault/vaults/secrets",
      "name": "[concat(parameters('keyVault'), '/', parameters('SecretName2'))]",
      "apiVersion": "2015-06-01",
      "properties": {
        "contentType": "text/plain",
        "value": "[parameters('Credentials2')]"
      }
    }
  ],
  "outputs": {}
}
What I've learned is that if there is an existing shared key vault to which I want to add secrets, I only have to define the sub-resources, in this case the secrets to be added to the existing key vault.
So this worked, and resulted in nothing else in the existing key vault being modified except for the new secrets being added.
Even though this is not a fully automated way of adding a whole new key vault setup for a new service, since it doesn't connect the new resources correctly by adding their principal IDs (identities), it's good enough for now, as I don't have to add each secret manually. I do, however, still have to add the principal IDs manually.
When using incremental mode to deploy the template, it should not overwrite the things in the keyvault.
But to be safe, I recommend you back up your key vault's keys, secrets, and certificates first. For the access policies, you can also export the key vault's template beforehand and save the accessPolicies section so you can restore it if needed.
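As a sketch of such a backup (the vault, secret, and key names here are assumptions, not from the question):

```shell
# Back up an individual secret and key to local encrypted blobs
az keyvault secret backup --vault-name my-shared-vault \
  --name DbConnection --file DbConnection.secretbackup
az keyvault key backup --vault-name my-shared-vault \
  --name MyKey --file MyKey.keybackup
```

The matching `az keyvault secret restore` / `az keyvault key restore` commands can bring them back into the same vault if something goes wrong.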
If you redeploy the existing KeyVault in incremental mode any child properties, such as access policies, will be configured as they’re defined in the template. That could result in the loss of some access policies if you haven’t been careful to define them all in your template. The documentation linked to above will give you a full list of the properties that would be affected. As per the docs this can affect properties even if they’re not explicitly defined.
KeyVault Secrets aren’t a child property of the KeyVault resource so won’t get overwritten. They can be defined in ARM either as a separate resource in the same template or in a different template file. You can define some, all or none of the existing secrets in ARM. Any that aren’t defined in the ARM template will be left as is.
If you’re using CI/CD to manage your deployments it’s worth considering setting up a test environment to apply the changes to first so you can validate that the result is as expected before applying them to your production environment.
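As a sketch of such a pipeline step (resource group, file names, and parameter values are assumptions), an incremental deployment of the secrets-only template might look like:

```shell
# Hypothetical names; Incremental is also the default mode for group deployments
az deployment group create \
  --resource-group my-rg \
  --mode Incremental \
  --template-file secrets.template.json \
  --parameters keyVault=my-shared-vault \
               SecretName1=DbConnection Credentials1="$DB_SECRET" \
               SecretName2=ApiKey Credentials2="$API_SECRET"
```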
At the end of the day, I'm trying to implement the solution linked from here: Reuse Github Actions self hosted runner on multiple repositories. But the tutorials walk you through setting up a GitHub app in the UI, and I'm trying to do it via the API.
Context:
Creating a new "GitHub App" (not "OAuth App") in GitHub Enterprise v3.0 (soon migrating to v3.1).
Trying to do it entirely over the API and explicitly NOT the UI, by creating an "app manifest" (https://docs.github.com/en/enterprise-server@3.0/developers/apps/building-github-apps/creating-a-github-app-from-a-manifest).
Everything I've read about permissions on docs.github.com ends up pointing over to https://docs.github.com/en/enterprise-server@3.0/rest/reference/permissions-required-for-github-apps, which does not include the specific values that can be used with the API.
On a GHE instance, there is a large list of permissions available at a URL with this pattern:
https://{HOSTNAME}/organizations/{ORG}/settings/apps/{APP}/permissions
The specific permission I'm trying to set says:
Self-hosted runners
View and manage Actions self-hosted runners available to an organization.
Access: Read & write
In the documentation (https://docs.github.com/en/enterprise-server@3.0/developers/apps/building-github-apps/creating-a-github-app-from-a-manifest#github-app-manifest-parameters) there is a parameter called default_permissions.
What is the identifier (key) to use for this permission, where the value is write?
I've tried:
the documented Self-hosted runners
the guess self-hosted runners
the guess self-hosted_runners
the guess self_hosted_runners
the guess selfhosted_runners
the guess runners
…but ultimately, the actual values which can be used here are (as far as I can tell after several hours of digging and guessing) undocumented.
actions:read and checks:read appear to work. Those are also undocumented, but I was able to figure it out by looking at the URLs, making an educated guess, and testing.
All of the tutorials I can find on the internet, including those on docs.github.com, walk you through creating a new GitHub app via the UI. I am very explicitly trying to do this over the API.
Any tips? Have I missed something? Is this not available in GHE yet?
Here is my app manifest, redacted.
{
  "public": true,
  "name": "My app",
  "description": "My app's description.",
  "url": "https://github.example.com/my-org/my-repo",
  "redirect_url": "http://localhost:9876/register/redirect",
  "default_events": [],
  "default_permissions": {
    "actions": "read",
    "checks": "read",
    "runners": "write"
  },
  "hook_attributes": {
    "url": "",
    "active": false
  }
}
WITH the "runners": "write" line, the error message I receive says:
Invalid GitHub App configuration
The configuration does not appear to be a valid GitHub App manifest.
× Error Default permission records resource is not included in the list
WITHOUT the "runners": "write" line, the submission is successful.
The GitHub team finally updated the documentation. The permission I was looking for was organization_self_hosted_runners.
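With that key, the default_permissions block from the manifest above becomes:

```json
"default_permissions": {
  "actions": "read",
  "checks": "read",
  "organization_self_hosted_runners": "write"
}
```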
I'm using the same destination in a number of apps, which are all connecting fine.
Created a new app (using the same SAP WEB IDE template).
The service is retrieved fine when selecting New > OData Service from the project menu, proving my destination credentials are fine.
Now, when I run the app, I get a basic authentication window. Cancelling this means I can't reach the service's metadata and therefore can't retrieve any data.
https://webidetesting0837185-s0015641139trial.dispatcher.hanatrial.ondemand.com/SAPUI5-ABAP-SFI/sap/opu/odata/sap/ZSV_SURVEY_SRV/$metadata?sap-language=EN 401 (Unauthorized)
My username and password is not being accepted even though it's correct.
Any ideas?
If your user/password is not accepted, I think you are missing some configuration in the backend; check logs such as ST22 or SLG1 for authorization issues. Also check whether your destinations in the Cloud Connector work properly.
To solve this in general without using basic authentication, you need to work with SAP CP's destination service (retrieving from OnPremise or via AppToAppSSO as the destination's type/mode), or work with the API service on SAP CP. For the first way (destination service), reference the destination in your SAPUI5 app via neo-app.json instead of using relative paths, like this:
{
  "routes": [
    {
      "path": "/destinations/SFSF_ODATA_PROXY",
      "target": {
        "type": "destination",
        "name": "sap_hcmcloud_core_odata"
      },
      "description": "SFSF Proxy OData"
    }
  ],
  "cacheControl": [
    {
      "directive": "public",
      "maxAge": 0
    }
  ]
}
Make sure you enter the credentials for your backend (and not for your CP account for example). You can also try and maintain the credentials in the destination itself by setting AuthenticationType as BasicAuthentication.
I have already solved this issue by changing the authentication type to Basic Authentication.
I am using GCP Container Engine in my project, and now I am facing an issue that I don't know whether it can be solved via secrets.
One of my deployments is a Node.js app server, where I use some npm modules that require my GCP service account key (a .json file) as input.
The input is the path where this JSON file is located. Currently I have managed to make it work by shipping this file as part of my Docker image; in the code I point to its path and it works as expected. The problem is that I don't think this is a good solution, because I want to decouple my Node.js image from the service account key: the key may change (e.g. between dev, test, and prod) and I would not be able to reuse my existing image (unless I build and push it to a different registry).
So how can I upload this service account JSON file as a secret and then consume it inside my pod? I saw that it is possible to create secrets out of files, but I don't know whether it is possible to specify the path to the place where this JSON file is stored. If it is not possible with secrets (because maybe secrets are not saved as files), how (if at all) can it be done?
You can make your JSON file a secret and consume it in your pod. See the following link for secrets (http://kubernetes.io/docs/user-guide/secrets/), but I'll summarize next.
First create a secret from your json file:
kubectl create secret generic nodejs-key --from-file=./key.json
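As an aside, that command just creates a Secret object for you; a minimal sketch of the equivalent declarative manifest (secret and file names taken from the command above, the key content below is a stand-in):

```shell
# Stand-in content for the real service account key, purely for illustration
printf '{"type":"service_account"}' > key.json

# Kubernetes Secret manifests store each file's data base64-encoded
ENCODED=$(base64 < key.json | tr -d '\n')

cat > nodejs-key.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: nodejs-key
data:
  key.json: ${ENCODED}
EOF
```

Applying this with `kubectl apply -f nodejs-key.yaml` should produce the same secret as the `--from-file` command.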
Now that you've created the secret, you can consume it in your pod (in this example as a volume):
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "nodejs"
  },
  "spec": {
    "containers": [{
      "name": "nodejs",
      "image": "node",
      "volumeMounts": [{
        "name": "foo",
        "mountPath": "/etc/foo",
        "readOnly": true
      }]
    }],
    "volumes": [{
      "name": "foo",
      "secret": {
        "secretName": "nodejs-key"
      }
    }]
  }
}
So when your pod spins up, the file will be dropped into the filesystem at /etc/foo/key.json.
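A quick way to verify this once the pod is running (pod name taken from the spec above):

```shell
# Should list key.json and print the service account JSON
kubectl exec nodejs -- ls /etc/foo
kubectl exec nodejs -- cat /etc/foo/key.json
```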
I think that if you deploy on GKE/GCE, you don't need the key at all and it's going to work fine.
I've only tested with Google Cloud Logging but it might be the same for other services as well.
E.g., I only need the below when deploying the app on GKE/GCE:
var loggingClient = logging({
  projectId: 'grape-spaceship-123'
});
I am trying to make an AJAX call to an XSJS service. With the new Web IDE, we need to use a destination to call the desired service. I have already set up a destination for my HANA system and exposed an XSJS service. What's the process for calling the service from my controller file (in the SAPUI5 app)?
Note: I have added the destination created in cockpit to my neo-app.json file as below:
{
  "path": "/TestDest",
  "target": {
    "type": "destination",
    "name": "TestDest"
  },
  "description": "Test Destination"
}
I would use jQuery.ajax, as stated here.
Check this link: https://blogs.sap.com/2016/04/13/sapui5-application-consuming-odata-service-with-sap-web-ide/
Or configure the service URL in Component.js or manifest.json.