Where to put "FASTLANE_SESSION" value?

I'm using fastlane with a Fastfile and an Appfile.
According to this doc, I created a FASTLANE_SESSION variable so that I don't have to enter a two-factor verification code at every build. But I can't figure out where and how to put it to make it work. I don't use a CI service, only fastlane on the command line to deploy my iOS build. Help, please.

Run and follow the instructions: fastlane spaceauth -u some@email.com
When asked to copy the session, you can just say no; fastlane will store it on your Mac.
You should really consider creating an App Store Connect API key instead; then you avoid the session entirely.
Docs: https://docs.fastlane.tools/app-store-connect-api/#creating-an-app-store-connect-api-key
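For reference, here is a minimal sketch of the API-key route; the file name, key ID, and issuer ID below are placeholders, and the exact JSON format is described in the linked docs:
# fastlane/api_key.json (placeholder values):
# {
#   "key_id": "ABC123DEFG",
#   "issuer_id": "11111111-2222-3333-4444-555555555555",
#   "key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----"
# }
# Any spaceship-backed tool can then use the key, e.g. uploading a build:
fastlane pilot upload --api_key_path fastlane/api_key.json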

You need FASTLANE_SESSION available as an environment variable. If you're only running fastlane on the command line, you can do export FASTLANE_SESSION='<your-session>', and the next time you run them, the lanes that require the session should work.
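Putting it together, a minimal sketch of the whole flow in one shell session; the email, the session string, and the lane name are placeholders:
# Generate a session; spaceauth prints a FASTLANE_SESSION value
fastlane spaceauth -u some@email.com
# Export the session string that spaceauth printed (placeholder value)
export FASTLANE_SESSION='---\n- !ruby/object:HTTP::Cookie ...'
# Lanes that talk to App Store Connect should now skip the 2FA prompt
fastlane ios release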

How to verify a contract on Avalanche testnet using Brownie

I am trying to deploy and verify a contract using Brownie on the Avalanche testnet.
The contract deploys and verifies fine on Kovan. It deploys on the Avalanche testnet, but I cannot get it verified.
Brownie does not come with an explorer configured for the Avalanche testnet by default (I kept getting an explorer error), so I tried to add one.
I have tried variations of testnet.snowtrace.io and they all give a connection error, except:
https://testnet.snowtrace.io/api - gives ValueError: error
I am using export SNOWTRACE_TOKEN= as per the documentation for Avalanche and obtained an API key from https://snowtrace.io.
Any idea if and how this can be accomplished?
This does not seem to work on avax-test; I am using a manual workaround so far:
https://github.com/eth-brownie/brownie/issues/1417
By default Brownie's "avax-test" network doesn't have the explorer field set, so we have to set it manually by running the command below:
brownie networks modify avax-test explorer=https://api-testnet.snowtrace.io/api
After that you will be able to verify the contract.
Don't forget to set the environment variable:
SNOWTRACE_TOKEN=YOUR_TOKEN
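As a combined sketch, the full sequence from the shell might look like this; the token value and the deploy script name are placeholders, and publish_source=True is Brownie's flag for verifying source on deploy:
# Point Brownie's avax-test network at the Snowtrace testnet API
brownie networks modify avax-test explorer=https://api-testnet.snowtrace.io/api
# API key obtained from https://snowtrace.io (placeholder value)
export SNOWTRACE_TOKEN=YOUR_TOKEN
# Deploy with source verification enabled, e.g. a script that calls
# MyContract.deploy({"from": acct}, publish_source=True)
brownie run scripts/deploy.py --network avax-test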

Error in Google Cloud Shell Commands while working on the lab (Securing Google Cloud with CFT Scorecard)

I am working in a GCP lab (Securing Google Cloud with CFT Scorecard). All instructions for the lab are given.
First I have to run the following two commands to set environment variables:
export GOOGLE_PROJECT=$DEVSHELL_PROJECT_ID
export CAI_BUCKET_NAME=cai-$GOOGLE_PROJECT
In the second command given above, I don't know what to replace with my own credentials. Maybe that is the reason I am getting an error.
Now I have to enable the "cloudasset.googleapis.com" service. For this they gave the following command:
gcloud services enable cloudasset.googleapis.com \
--project $GOOGLE_PROJECT
The error for this is shown in the screenshot attached herewith:
[screenshot: error in the service enabling command]
The next step is to clone the policy library. The given command for that is:
git clone https://github.com/forseti-security/policy-library.git
After that they said: "You realize Policy Library enforces policies that are located in the policy-library/policies/constraints folder, in which case you can copy a sample policy from the samples directory into the constraints directory".
and gave this command:
cp policy-library/samples/storage_blacklist_public.yaml policy-library/policies/constraints/
On running this command I received this:
[screenshot: error on running the directory command]
Finally they said "Create the bucket that will hold the data that Cloud Asset Inventory (CAI) will export" and gave the following command:
gsutil mb -l us-central1 -p $GOOGLE_PROJECT gs://$CAI_BUCKET_NAME
I am confused about where to put my own credentials; for example, in place of the project ID I wrote my own project ID.
Also, I don't know why these errors are occurring. Kindly help me.
I'm unable to access the tutorial.
What happens if you run the following:
echo ${DEVSHELL_PROJECT_ID}
I suspect you'll get an empty result because I think this environment variable isn't actually set.
I think it should be:
echo ${DEVSHELL_GCLOUD_CONFIG}
Does that return a result?
If so, perhaps try using that variable instead:
export GOOGLE_PROJECT=${DEVSHELL_GCLOUD_CONFIG}
export CAI_BUCKET_NAME=cai-${GOOGLE_PROJECT}
It's not entirely clear to me why this tutorial is using this approach but, if the above works, it may get you further along.
Were you asked to create a Google Cloud Platform project?
As per the shared error, this seems to be because your env variable GOOGLE_PROJECT is not set. You can verify it by using echo $GOOGLE_PROJECT and seeing whether it returns the project ID or not. You could also use echo $DEVSHELL_PROJECT_ID. If that returns the project ID and the former doesn't, it means that you didn't export the variable as stated at the beginning.
If the problem is that GOOGLE_PROJECT doesn't have any value, there are different approaches to solving it:
1. Set the env variable as you explained at the beginning. Obviously this will only work if the variable DEVSHELL_PROJECT_ID is also set.
export GOOGLE_PROJECT=$DEVSHELL_PROJECT_ID
2. Manually set the project ID into that variable. This is far from ideal because Qwiklabs creates a new temporary project for every lab, so this will only work while you are still on that project. The project ID can be seen on both of your shared screenshots.
export GOOGLE_PROJECT=qwiklabs-gcp-03-c6e1787dc09e
3. Avoid using the argument --project. According to the documentation, that argument is optional, and if it is omitted the command takes the default project from the configuration settings. You can get the current project by using this:
gcloud config get-value project
If the previous command matches the project ID you want to use, you can simply issue the following command:
gcloud services enable cloudasset.googleapis.com
Notice that the project ID is not being explicitly mentioned using --project.
Regarding your issue with the GitHub file, I have checked the repository and the file storage_blacklist_public.yaml doesn't seem to be in the directory policy-library/samples. There seems to be a trace that it was once there, but it isn't anymore; they should probably update the lab.
About your credentials confusion, you don't have to use your own project ID, just the one given in your lab. If I recall properly, all the needed data should be on the left side of the lab. Still, you shouldn't need to authenticate in a normal situation, as you are already logged into your temporary project if you are accessing it from Cloud Shell, which is where you should be doing all this.
Adding this for later versions: in Cloud Shell you can set a temporary variable for the current project ID with
PROJECT_ID="$(gcloud config get-value project)"
and then use it like
--project ${PROJECT_ID}
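Pulling the working pieces together, the corrected lab sequence might look like this in Cloud Shell (the region and bucket name follow the lab's own commands):
# Derive the project ID from the active gcloud configuration
export GOOGLE_PROJECT="$(gcloud config get-value project)"
export CAI_BUCKET_NAME="cai-${GOOGLE_PROJECT}"
# Sanity-check both variables before going further
echo "${GOOGLE_PROJECT} ${CAI_BUCKET_NAME}"
# Enable the Cloud Asset API and create the CAI export bucket
gcloud services enable cloudasset.googleapis.com --project "${GOOGLE_PROJECT}"
gsutil mb -l us-central1 -p "${GOOGLE_PROJECT}" "gs://${CAI_BUCKET_NAME}"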

How to use fastlane behind proxy

I can't find any option in fastlane to set a proxy. Is there a direct way to solve this?
Thanks very much for any help!
I had the same problem, and for me this site helped, as fastlane uses Faraday internally. You have to set up the proxy environment variables for Faraday with the following commands:
$ export http_proxy="http://proxy_host:proxy_port"
$ export https_proxy="https://proxy_host:proxy_port"
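If you'd rather not export them for the whole shell session, you can also prepend the same variables to a single run (the lane name here is just an example):
http_proxy="http://proxy_host:proxy_port" https_proxy="https://proxy_host:proxy_port" fastlane ios beta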
Any of the Fastlane tools that use spaceship (i.e. the Apple APIs) can be proxied using a combination of three environment variables.
SPACESHIP_PROXY: sets the HTTP proxy to use (e.g. SPACESHIP_PROXY=https://localhost:9090)
SPACESHIP_PROXY_SSL_VERIFY_NONE: when present, disables SSL verification (to allow inspecting HTTPS requests)
SPACESHIP_DEBUG: equivalent to SPACESHIP_PROXY=https://127.0.0.1:8888 SPACESHIP_PROXY_SSL_VERIFY_NONE=1, preconfigured for Charles Proxy defaults.
To use these, set them as environment variables in your shell, or prepend them to any fastlane command. For example, SPACESHIP_PROXY=https://localhost:9090 bundle exec fastlane
Source: Spaceship debugging documentation
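For instance, to inspect a run with Charles Proxy using those preconfigured defaults (the lane name is hypothetical):
# Equivalent to SPACESHIP_PROXY=https://127.0.0.1:8888 SPACESHIP_PROXY_SSL_VERIFY_NONE=1
SPACESHIP_DEBUG=1 bundle exec fastlane ios beta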

Google Vision API - StatusCode.RESOURCE_EXHAUSTED

I am new to the Google Vision API and I would like to run label detection on approx. 10 images using the vision quickstart.py file. With only 3 images it succeeds, but with more than 3 images I get the error message below. I know that I need to change something in my setup, but I do not know what.
Here is my error message:
google.gax.errors.RetryError: GaxError(Exception occurred in retry method
that was not classified as transient, caused by <_Rendezvous of RPC that
terminated with (StatusCode.RESOURCE_EXHAUSTED, Insufficient tokens for
quota 'DefaultGroup' and limit 'USER-100s' of service
'vision.googleapis.com' for consumer 'project_number: XXX'.)>)
Does anybody know what I need to do?
Any help would be much appreciated
Cheers,
Andi
I ran into the same problem and fixed it with these steps:
1. Make sure you have the Google Cloud SDK properly installed: https://cloud.google.com/vision/docs/reference/libraries
2. Set up a Service Account in the Google Cloud backend: https://developers.google.com/identity/protocols/OAuth2ServiceAccount#creatinganaccount
3. Create a Service Account key and download it as a JSON file to a local folder. You need to keep the key private.
4. Export the file path to the key file as an environment variable (export GOOGLE_APPLICATION_CREDENTIALS=/path/to/your/keyfile.json) and activate the service account: gcloud auth activate-service-account --key-file /path/to/your/keyfile.json
5. Log out of and back into the console.
6. Make sure the environment variable is properly set with printenv.
7. Try your py-script again...
Good luck...
Edit: In addition to steps 1-3 above, you can just do vision_client = vision.Client.from_service_account_json('/path/to/your/keyfile.json') in your script. No need for the env variable then.
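As a minimal sketch of the environment-variable route (the key path is a placeholder; quickstart.py is the script from the question):
# Point Google's client libraries at the service-account key
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/your/keyfile.json"
# Confirm the variable is set, then rerun the script
printenv GOOGLE_APPLICATION_CREDENTIALS
python quickstart.py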

Webistrano - how to clear global HTML cache after deployment

I am new to webistrano so apologies if this is a trivial matter...
I am using webistrano to deploy php code to several production servers, this is all working great. My problem is that I need to clear HTML cache on my cache servers (varnish cache) after the code update. I can't figure out how to build a recipe that will be executed on the webistrano machine (and will run the relevant shell script that will clear the cache) and not on each of the deployment target machines.
Thanks for the help,
Yariv
The simplest method is to execute the varnishadm tool with the proper parameters inside deploy:restart:
set :varnish_ban_pattern, "req.url ~ ^/"
set :varnish_terminal_address_port, "127.0.0.1:6082"
set :varnish_varnishadm, "/usr/bin/varnishadm"

task :restart, :roles => :web do
  run "#{varnish_varnishadm} -T #{varnish_terminal_address_port} ban \"#{varnish_ban_pattern}\""
end
Thanks for the answer. I actually need to do more than only clearing the cache, so I will execute a bash script locally as described here:
How do I execute a Capistrano task locally?
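For illustration, the locally-executed script could look something like this; the host names are hypothetical, and the ban command mirrors the recipe above:
#!/usr/bin/env bash
# Invalidate all cached objects on each Varnish cache server
for host in cache1.example.com cache2.example.com; do
  /usr/bin/varnishadm -T "${host}:6082" ban 'req.url ~ ^/'
done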