Refreshing global configuration details of a Spring Cloud project without manually triggering 'curl -X POST http://localhost:8890/actuator/refresh' - spring-cloud

We are creating a Spring Cloud configuration through which we can change global configuration files. Although we use @RefreshScope so that changes show up on a browser refresh, we still need to trigger 'curl -X POST http://localhost:8890/actuator/refresh' from the command line. Could you please let us know if there is any way to avoid the manual triggering?
We tried to refresh the global configuration values of the property files without invoking the actuator endpoint from the command line, but it didn't work.
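For reference, the pieces involved in our current manual setup look roughly like this (a sketch; the exposure property is the standard Spring Boot 2 actuator setting, and the port is from our project):
# application.properties must expose the refresh endpoint:
#   management.endpoints.web.exposure.include=refresh
# beans annotated with @RefreshScope are then rebound by:
curl -X POST http://localhost:8890/actuator/refresh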

Related

How to see the headers in a request while using Kong Ingress controller

In any ingress controller like https://traefik.io/ there is usually a DEBUG mode to run in. In this mode you can actually see the headers in the request, which helps in debugging.
However, in https://docs.konghq.com/kubernetes-ingress-controller I am unable to find the same.
I have installed Kong using helm and it installs and runs fine.
I want to be able to see the headers in a request and I can't seem to find a way to do it. There is no logging level defined in the values.yaml file.
It turns out you can enable debug mode using an environment variable. Add the following alongside the other environment variables to get logs in DEBUG mode:
env:
  log_level: debug
However, to see the values of request headers and responses you will need a logging plugin, such as Kong's file-log plugin.
You might want to point it at /dev/stdout so the output shows up in the Kong logs.
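A minimal sketch of that setup, assuming the Kubernetes Ingress Controller's KongPlugin CRD and the file-log plugin (the resource name request-logger is ours):
kubectl apply -f - <<'EOF'
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: request-logger
plugin: file-log
config:
  path: /dev/stdout
EOF
# then attach it to an ingress (or service) via the annotation:
#   konghq.com/plugins: request-logger
Once attached, each proxied request is logged, including its request and response headers, to Kong's stdout.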
For debugging, one can send the header Kong-Debug: 1 (see https://docs.konghq.com/gateway-oss/2.5.x/proxy/#evaluation-order), and there is an upcoming change in 2.6 that might be helpful as well; see https://github.com/Kong/kong/pull/6744
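For example (a sketch; the hostname is illustrative), the debug header can be sent with curl and should make Kong return routing details such as Kong-Route-Id in the response headers:
curl -i -H 'Kong-Debug: 1' http://my-kong-proxy/some/path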

Keycloak Admin CLI - Updating a realm with JSON file

Objective:
Our objective is to update the entire realm provided a json file.
Problem:
The issue at hand is we cannot seem to update the realm entirely to include the client changes as well.
Actions taken:
Option 1:
Based on the Keycloak Admin CLI documentation, a Keycloak realm can be updated from a JSON file using the following command:
kcadm.sh update realms/demorealm -f demorealm.json
However, when making an update to a property within the clients section of the JSON file (i.e. a client's description), the change is not reflected within the Keycloak realm.
We also took a look at kcadm.sh help update. We tried to utilize the merge flag ("Merge new values with existing configuration on the server. Merge is automatically enabled unless --file is specified"). We do have a file specified and therefore tried to enable merging explicitly using the flag, but to no success: the clients did not change as expected.
Option 2:
We have tried the partial import command found in Keycloak documentation
$ kcadm.sh create partialImport -r demorealm -s ifResourceExists=OVERWRITE -o -f demorealm.json
With ifResourceExists set to OVERWRITE, it correctly changes clients. However, it alters other realm configuration, such as assigned user roles. Ex: after manually creating a new user via the Keycloak UI and setting roles for the user, the roles are lost after running the command with the OVERWRITE flag set. Setting ifResourceExists to SKIP does not update values for a client, as the client is skipped altogether.
Question:
Is it possible, either with a different command or different flags, to update a Keycloak realm in its entirety with a single Keycloak admin command? Neither Option 1 nor Option 2 listed above worked for us. We want to avoid making individual update client calls when updating the realm.
Notes:
We have properly authenticated and confirmed that changes made at the realm level are reflected in Keycloak.
After further research, the approach we decided to go with is to update realm level settings with:
kcadm.sh update realms/demorealm -f demorealm.json
We then iterate over the clients and add/update them with:
kcadm.sh update clients/{clients-uuid} -f clientfile.json
Since the previous command does not update client roles, we must then use the following command to add the roles:
kcadm.sh update clients/{clients-uuid}/roles/{role-name} -f rolefile.json
Finally, to add in composite roles, we use this command:
kcadm.sh add-roles --cclientid {clientID} --rid {id of client role} --rolename {name of role to add}
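A rough sketch of the client iteration, assuming jq is available and one JSON file per client (the file naming is ours):
for uuid in $(kcadm.sh get clients -r demorealm --fields id | jq -r '.[].id'); do
  kcadm.sh update clients/$uuid -r demorealm -f client-$uuid.json
done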

Error in Google Cloud Shell Commands while working on the lab (Securing Google Cloud with CFT Scorecard)

I am working in a GCP lab (Securing Google Cloud with CFT Scorecard). All instructions for the lab are given.
First I have to run the following two commands to set environment variables
export GOOGLE_PROJECT=$DEVSHELL_PROJECT_ID
export CAI_BUCKET_NAME=cai-$GOOGLE_PROJECT
In the second command given above, I don't know what to replace with my own credentials; maybe that is the reason I am getting an error.
Now I have to enable the cloudasset.googleapis.com service. For this they gave the following command:
gcloud services enable cloudasset.googleapis.com \
--project $GOOGLE_PROJECT
The error for this is shown in the screenshot attached herewith:
[screenshot: error from the service enabling command]
Next step is to clone the policy: The given command for that is:
git clone https://github.com/forseti-security/policy-library.git
After that they said: "You realize Policy Library enforces policies that are located in the policy-library/policies/constraints folder, in which case you can copy a sample policy from the samples directory into the constraints directory".
and gave this command:
cp policy-library/samples/storage_blacklist_public.yaml policy-library/policies/constraints/
On running this command I received this:
[screenshot: error from the copy command]
Finally they said "Create the bucket that will hold the data that Cloud Asset Inventory (CAI) will export" and gave the following command:
gsutil mb -l us-central1 -p $GOOGLE_PROJECT gs://$CAI_BUCKET_NAME
I am confused about where to put my own credentials; for example, in place of the project ID I wrote my own project ID.
Also, I don't know why these errors are occurring. Kindly help me.
I'm unable to access the tutorial.
What happens if you run the following:
echo ${DEVSHELL_PROJECT_ID}
I suspect you'll get an empty result because I think this environment variable isn't actually set.
I think it should be:
echo ${DEVSHELL_GCLOUD_CONFIG}
Does that return a result?
If so, perhaps try using that variable instead:
export GOOGLE_PROJECT=${DEVSHELL_GCLOUD_CONFIG}
export CAI_BUCKET_NAME=cai-${GOOGLE_PROJECT}
It's not entirely clear to me why this tutorial is using this approach but, if the above works, it may get you further along.
Were you asked to create a Google Cloud Platform project?
As per the shared error, this seems to be because your env variable GOOGLE_PROJECT is not set. You can verify it by using echo $GOOGLE_PROJECT and seeing whether it returns the project ID or not. You could also use echo $DEVSHELL_PROJECT_ID. If that returns the project ID and the former doesn't, it means that you didn't export the variable as stated at the beginning.
If the problem is that GOOGLE_PROJECT doesn't have any value, there are different approaches on how to solve it.
Set the env variable as you explained at the beginning. Obviously this will only work if the variable DEVSHELL_PROJECT_ID is also set.
export GOOGLE_PROJECT=$DEVSHELL_PROJECT_ID
Manually set the project ID into that variable. This is far from ideal because Qwiklabs creates a new temporary project for every lab, so this would only have worked if you were still in that project. The project ID can be seen in both of your shared screenshots.
export GOOGLE_PROJECT=qwiklabs-gcp-03-c6e1787dc09e
Avoid using the argument --project. According to the documentation, the aforementioned argument is optional and if none is used the command will take the one by default, which will be on the configuration settings. You can get the current project by using this:
gcloud config get-value project
If the previous command matches the project ID you want to use, you can simply issue the following command:
gcloud services enable cloudasset.googleapis.com
Notice that the project ID is not being explicitly mentioned using --project.
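Before re-running the lab commands, a quick sanity check of all three values can help (a sketch):
echo "DEVSHELL_PROJECT_ID=${DEVSHELL_PROJECT_ID}"
echo "GOOGLE_PROJECT=${GOOGLE_PROJECT}"
gcloud config get-value project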
Regarding your issue with the GitHub file, I have checked the repository and the file storage_blacklist_public.yaml doesn't seem to be in the policy-library/samples directory. There is a trace that it was once there, but it isn't anymore; they should probably update the lab.
About your credentials confusion: you don't have to use your own project ID, just the one given in your lab. If I recall correctly, all the needed data should be on the left side of the lab page. Still, you shouldn't need to authenticate in a normal situation, as you are already logged in to your temporary project if you are accessing it from Cloud Shell, which is where you should be doing all this.
Adding this for later versions: in the gcloud shell you can set a temporary variable holding the current project ID with
PROJECT_ID="$(gcloud config get-value project)"
and then use it like
--project ${PROJECT_ID}
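For example, combining the two lines above with the lab's enable command:
PROJECT_ID="$(gcloud config get-value project)"
gcloud services enable cloudasset.googleapis.com --project ${PROJECT_ID}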

How to pass API parameters to GCP cloud build triggers

I have a large set of GCP Cloud Build Triggers that I invoke via a Cloud scheduler, all running fine.
Now I want to invoke these triggers by an external API call and pass them dynamic parameters that vary in values and number of parameters.
I was able to start a trigger by running an API request but any JSON parameters in the API request that I sent were ignored.
Google describes substitution parameters at https://cloud.google.com/cloud-build/docs/configuring-builds/substitute-variable-values. I define these variables in the cloudbuild.yaml file; however, they were not propagated into my shell script from the API request.
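For context, the substitutions are defined and consumed roughly like this (a sketch; _MY_VAR_1 and the echo step are illustrative, not my real build):
cat <<'EOF' > cloudbuild.yaml
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  # Cloud Build replaces $_MY_VAR_1 in args before the step runs
  args: ['-c', 'echo "$_MY_VAR_1"']
substitutions:
  _MY_VAR_1: 'default_value'
EOF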
I don't see any errors with authentication or authorization, so security may not be the issue.
Is my idea supported at all, or do I need to resort to another solution such as running a GKE cluster with containers that would expose their own API (a very heavyweight solution)?
We do something similar -- we migrated from Jenkins to GCB but for some people we still need a nicer "UI" to start builds / pass variables.
I got scripts from here and modified them for our own needs: https://medium.com/@nieldw/put-your-build-triggers-into-source-control-with-the-cloud-build-api-ed0c18d6fcac
Here is their REST API: https://cloud.google.com/cloud-build/docs/api/reference/rest/v1/projects.triggers/run
For the script below, keep in mind you need the trigger ID of what you want to run. (You can also get this by parsing the output of another REST API call.)
TRIGGER_ID=$1
# we need to specify at least the branch name or commit id (checked below)
BRANCH_OR_SHA=$2
# check if branch_name or commit_sha
if [[ $BRANCH_OR_SHA =~ [0-9a-f]{5,40} ]]; then
# is COMMIT_HASH
COMMIT_SHA=$BRANCH_OR_SHA
BRANCH_OR_SHA="\"commitSha\": \"$COMMIT_SHA\""
else
# is BRANCH_NAME
BRANCH_OR_SHA="\"branchName\": \"$BRANCH_OR_SHA\""
fi
# This is the request we send to google so it knows what to build
# Here we're overriding some variables that we have already set in the default 'cloudbuild.yaml' file of the repo
cat <<EOF > request.json
{
"projectId": "$PROJECT_ID",
$BRANCH_OR_SHA,
"substitutions": {
"_MY_VAR_1": "my_value",
"_MY_VAR_2": "my_value_2"
}
}
EOF
# our curl post, we send 'request.json' with info, add our Token, and set the trigger_id
curl -X POST -T request.json -H "Authorization: Bearer $(gcloud config config-helper \
--format='value(credential.access_token)')" \
https://cloudbuild.googleapis.com/v1/projects/"$PROJECT_ID"/triggers/"$TRIGGER_ID":run
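Assuming the script is saved as run_trigger.sh (the name is ours) and PROJECT_ID is set, it would be invoked like:
PROJECT_ID=my-gcp-project ./run_trigger.sh <trigger-id> master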

Is it possible to ignore Ember CLI's http-mocks proxy server?

I've created some mocks with ember g http-mock, and they live in my Ember app under /server.
I'd also like to be able to run my Ember CLI app while proxying all API requests to localhost:3000.
It seems that if /server exists, Ember will use it, regardless of the --proxy flag. I found some discussion about a --no-http-mocks flag near the end of this issue, but I don't think there's a formal proposal yet.
Is there any hackish intermediate way to get ember cli to ignore /server, other than by deleting the entire /server directory?
You could probably set up ember-cli to enable/disable http-mock based on a config/environment.js variable as described here:
http://discuss.emberjs.com/t/how-to-disable-http-mock-server-within-environment-config-file/6660
Then, to manually flip http-mock on and off as you like, pass in a process environment variable when you run ember serve at the command line. The way to do that is described in this SO post:
How to pass API keys in environment variables to Ember CLI using process.env?
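Putting the two together, something like this should work (a sketch; the HTTP_MOCKS variable name and the check in server/index.js are hypothetical):
# in server/index.js, bail out early when mocks are disabled:
#   if (process.env.HTTP_MOCKS === 'false') { return; }
HTTP_MOCKS=false ember serve --proxy http://localhost:3000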