Fastlane Match does not recognise repository after changing URL

After changing the match repository URL from
ssh://git@192.0.1.34/Users/git/ios-provisioning-hq.git to ssh://git@172.16.0.34/Users/git/ios-provisioning-hq.git,
running the match command does not recognise that there is a match repository already set up, and instead asks me to set an initial password:
Enter the passphrase that should be used to encrypt/decrypt your certificates
This passphrase is specific per repository and will be stored in your local keychain
Make sure to remember the password, as you'll need it when you run match on a different machine (...)
How can I fix this without setting up a new repository for match?

Go to the keychain of the machine where you run fastlane and search for match.
Delete the entry referring to your old URL, named like match_ssh://git@192.0.1.34/Users/git/ios-provisioning-hq.git. (This shouldn't make a difference, but there is no reason to keep it.)
Run fastlane match change_password from your project directory, then enter your old password and a new one. You can use the same password for both.
You should then be able to provision again without running match nuke.
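If you prefer the command line to the Keychain Access app, here is a rough sketch using macOS's security tool. It assumes match stored the passphrase as a generic keychain item whose service name follows the match_<repo URL> pattern above; check the exact item name in Keychain Access first.

# Inspect the stale entry for the old URL (the service name is an assumption)
security find-generic-password -s "match_ssh://git@192.0.1.34/Users/git/ios-provisioning-hq.git"
# Delete it
security delete-generic-password -s "match_ssh://git@192.0.1.34/Users/git/ios-provisioning-hq.git"
# Then store the passphrase under the new URL
fastlane match change_password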


How to deploy a Dash app with dash_auth to Heroku through GitHub branch tracking?

I'm building a Dash app using the basic authentication from dash_auth. Unfortunately, this requires hardcoding a dictionary of usernames and passwords. That is not a huge problem in itself, since the app is only for in-house use.
Now we would like to deploy this to Heroku by automatically tracking one branch of the GitHub repo, because that seems most convenient. The problem is that this would require us to put the hardcoded passwords in the GitHub repository as well.
This post suggested using environment variables for tokens and client keys but how should I do this for dictionaries of passwords?
I'm open to alternative solutions as well.
Nothing really changes when doing this with a dictionary. You just need to parse the JSON string into a Python data structure.
In your application, instead of hard-coding the dictionary as shown in the documentation:
VALID_USERNAME_PASSWORD_PAIRS = {
'hello': 'world'
}
pull it in from the environment, e.g. something like this:
import json
import os
VALID_USERNAME_PASSWORD_PAIRS = json.loads(os.getenv("VALID_USERNAME_PASSWORD_PAIRS"))
Then set the usernames and passwords as a Heroku config var:
heroku config:set VALID_USERNAME_PASSWORD_PAIRS='{"hello": "world"}'
The single quotes here should avoid most issues with special characters being interpreted by your shell.
For local development you can set a VALID_USERNAME_PASSWORD_PAIRS environment variable, e.g. via a .env file if you are using tooling that understands that.
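For example, with the python-dotenv package (an assumption; any tooling that populates os.environ works) you would keep the value in a git-ignored .env file containing:

VALID_USERNAME_PASSWORD_PAIRS={"hello": "world"}

and load it at startup:

import json
import os
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # reads .env into os.environ without overriding existing variables
VALID_USERNAME_PASSWORD_PAIRS = json.loads(os.environ["VALID_USERNAME_PASSWORD_PAIRS"])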
Another option for local development would be to hard-code just a default value into your script by adding a default argument:
VALID_USERNAME_PASSWORD_PAIRS = json.loads(
os.getenv("VALID_USERNAME_PASSWORD_PAIRS", default='{"local": "default"}')
)
Note that we give default a string here, not a dict, since we're passing the result into json.loads().
Be careful with this last option: you could accidentally publish the code without setting the environment variable, in which case the local default credentials would still work.
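Putting it together, a minimal sketch of how the pieces fit; dash_auth.BasicAuth is the class from the dash_auth documentation, and the layout here is just a placeholder (depending on your Dash version, html may instead be imported as from dash import html):

import json
import os

import dash
import dash_auth
import dash_html_components as html

VALID_USERNAME_PASSWORD_PAIRS = json.loads(
    os.getenv("VALID_USERNAME_PASSWORD_PAIRS", default='{"local": "default"}')
)

app = dash.Dash(__name__)
auth = dash_auth.BasicAuth(app, VALID_USERNAME_PASSWORD_PAIRS)
app.layout = html.Div("Logged in")

if __name__ == "__main__":
    app.run_server()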

Error in Google Cloud Shell Commands while working on the lab (Securing Google Cloud with CFT Scorecard)

I am working in a GCP lab (Securing Google Cloud with CFT Scorecard). All instructions for the lab are given.
First I have to run the following two commands to set environment variables
export GOOGLE_PROJECT=$DEVSHELL_PROJECT_ID
export CAI_BUCKET_NAME=cai-$GOOGLE_PROJECT
In the second command above, I don't know what to replace with my own credentials. Maybe that is the reason I am getting an error.
Now I have to enable the cloudasset.googleapis.com service. For this they gave the following command:
gcloud services enable cloudasset.googleapis.com \
--project $GOOGLE_PROJECT
The error for this is shown in the attached screenshot:
Error in the service-enabling command
The next step is to clone the policy library. The given command for that is:
git clone https://github.com/forseti-security/policy-library.git
After that they said: "You realize Policy Library enforces policies that are located in the policy-library/policies/constraints folder, in which case you can copy a sample policy from the samples directory into the constraints directory".
and gave this command:
cp policy-library/samples/storage_blacklist_public.yaml policy-library/policies/constraints/
On running this command I received this error:
Error on running the copy command
Finally they said "Create the bucket that will hold the data that Cloud Asset Inventory (CAI) will export" and gave the following command:
gsutil mb -l us-central1 -p $GOOGLE_PROJECT gs://$CAI_BUCKET_NAME
I am confused about where to substitute my own credentials; for example, in place of the project ID should I write my own project ID?
Also, I don't know why these errors are occurring. Kindly help me.
I'm unable to access the tutorial.
What happens if you run the following:
echo ${DEVSHELL_PROJECT_ID}
I suspect you'll get an empty result because I think this environment variable isn't actually set.
I think it should be:
echo ${DEVSHELL_GCLOUD_CONFIG}
Does that return a result?
If so, perhaps try using that variable instead:
export GOOGLE_PROJECT=${DEVSHELL_GCLOUD_CONFIG}
export CAI_BUCKET_NAME=cai-${GOOGLE_PROJECT}
It's not entirely clear to me why this tutorial is using this approach but, if the above works, it may get you further along.
Were you asked to create a Google Cloud Platform project?
As per the shared error, this seems to be because your env variable GOOGLE_PROJECT is not set. You can verify it by using echo $GOOGLE_PROJECT and seeing whether it returns the project ID or not. You could also use echo $DEVSHELL_PROJECT_ID. If that returns the project ID and the former doesn't, it means that you didn't export the variable as stated at the beginning.
If the problem is that GOOGLE_PROJECT doesn't have any value, there are different approaches on how to solve it.
Set the env variable as you explained at the beginning. Obviously this will only work if the variable DEVSHELL_PROJECT_ID is also set.
export GOOGLE_PROJECT=$DEVSHELL_PROJECT_ID
Manually set the project ID into that variable. This is far from ideal, because Qwiklabs creates a new temporary project for every lab, so this would only work while you were still on that project. The project ID can be seen in both of your shared screenshots.
export GOOGLE_PROJECT=qwiklabs-gcp-03-c6e1787dc09e
Avoid using the --project argument. According to the documentation, this argument is optional; if it is omitted, the command uses the default project from your gcloud configuration settings. You can get the current project with:
gcloud config get-value project
If the previous command matches the project ID you want to use, you can simply issue the following command:
gcloud services enable cloudasset.googleapis.com
Notice that the project ID is not being explicitly mentioned using --project.
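For example, using the project ID visible in your screenshots (yours will differ per lab):

gcloud config set project qwiklabs-gcp-03-c6e1787dc09e
gcloud config get-value project                    # verify it took effect
gcloud services enable cloudasset.googleapis.com   # no --project needed now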
Regarding your issue with the GitHub file, I have checked the repository and the file storage_blacklist_public.yaml no longer seems to be in the directory policy-library/samples. There is a trace that it was once there, but it has since been removed, so the lab should probably be updated.
About your credentials confusion: you don't have to use your own project ID, just the one given in your lab. If I recall correctly, all the needed data should be on the left side of the lab. Still, you shouldn't need to authenticate in a normal situation, as you are already logged into your temporary project if you are accessing it from Cloud Shell, which is where you should be doing all of this.
Adding this for later versions:
in Cloud Shell you can set a temporary variable holding the current project ID with
PROJECT_ID="$(gcloud config get-value project)"
and then use it like
--project ${PROJECT_ID}
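For instance, the lab's remaining commands could then be written like this (a sketch reusing the lab's own bucket-naming convention):

PROJECT_ID="$(gcloud config get-value project)"
gcloud services enable cloudasset.googleapis.com --project ${PROJECT_ID}
gsutil mb -l us-central1 -p ${PROJECT_ID} gs://cai-${PROJECT_ID}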

Github - Failed to clone/remote sync or clone or any github activity

I have GitHub Desktop on Windows 7. GitHub doesn't seem to allow me to check in code; something is messed up.
I did try changing the credentials and so forth by looking things up online, but nothing seems to work.
I still see the Bad credentials error alongside some WAMP Developer errors.
I don't know how WAMP Developer is related to GitHub.
I did have WAMP Developer on the PC once upon a time.
The log file for the attempt is here: Github log file.
The error:
2016-03-22 12:44:50.7329|ERROR|thread: 5|StartupLogging| MISSING PATH!!: 'C:\WampDeveloper\Components\Apache\bin'
That simply means your %PATH% currently references a non-existent path: you could clean up your PATH environment variable, which is currently quite large.
This is not blocking for GitHub Desktop, though.
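To check, you can print the variable from a Command Prompt and look for the stale entry (then remove it via System Properties > Environment Variables):

rem Show the current PATH
echo %PATH%
rem Filter for the stale WampDeveloper entry
echo %PATH% | find /i "WampDeveloper"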
The other error is linked to a key previously used for:
Logged user r... off of host 'https://<server_url>'
When that key C:\Users\ffgr.ghjk\.ssh\github_rsa is used to authenticate to github.com, it does not work.
Make sure that key (the public one, github_rsa.pub) is added to your GitHub account: see "Adding a new SSH key to your GitHub account".
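You can test whether the key authenticates from a Git/SSH-enabled shell; ssh -T git@github.com is GitHub's documented connection test, and the -i path is the key named in the log:

ssh -i "C:\Users\ffgr.ghjk\.ssh\github_rsa" -T git@github.com
# On success it prints: Hi <username>! You've successfully authenticated...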

Should files involved in SSL certificate be kept confidential (added to .gitignore)?

In the process of setting up an SSL certificate for my site, several different files were created:
server.csr
server.key
server.pass.key
site_name.crt
Should these be added to .gitignore before pushing my site to github?
Apologies in advance if this is a dumb question.
Should these be added to .gitignore before pushing my site to github?
They should not be in the repo at all; store them outside of the repo entirely.
That way:
you don't need to manage a .gitignore,
you can store those keys somewhere safe.
GitHub actually had to change its search feature back in 2013 after seeing users store keys and passwords in public repositories. See the full story.
The article includes this quote:
The mistakes may reflect the overall education problem among software developers.
When you have expedited programs—"6 weeks and you'll be a real software developer"—to teach developing, security becomes an afterthought. And considering "90 percent of development is copying stuff you don't understand, I'd bet most of them simply don't know what id_rsa is"
In 2016, this "book" (as a joke) reflected that.
The OP adds:
I think Heroku requires putting the files into the repo in order to run ">heroku certs:add server.crt server.key" and setup the cert.
"Configuration and Config Vars" is one illustration on that topic:
A better solution is to use environment variables, and keep the keys out of the code. On a traditional host or working locally you can set environment vars in your bashrc file. On Heroku, you use config vars.
The article "Heroku: SSL Endpoint" does not force you to have those key and certificate in your code. They can be generated directly on Heroku and saved anywhere else for safekeeping. Just not in a git repo.
I would like to add to @VonC's answer, as the situation is in fact more complicated:
The files have different content, and depending on that they require different access controls:
server.csr: This is a certificate signing request file. It is generated from the key (server.key in your case) and used to create the certificate (site_name.crt in your case). It should be deleted once the certificate has been created, and it should not be shared with untrusted parties.
server.key: This is the private key. Under no circumstances should this file be shared outside of the server, and it must never end up in a code repository. On the system it must be stored with 0600 permissions (i.e. readable and writable only by its owner), owned by either root or the web server user. That is for Linux at least; on Windows access rights are handled differently, but the key has to be locked down similarly (see the sketch after this list).
site_name.crt: This is the signed certificate. It is considered public: the certificate essentially carries the public key and is sent to everyone who connects to the server. It can be stored in the repository. (The hint from @VonC is correct, code and data should be separated, but it could go e.g. in a separate repository used for deployment.)
server.pass.key: This is most likely the passphrase-protected version of the private key (a common workflow creates server.pass.key first and then strips the passphrase to produce server.key). If so, the same rules as for the key apply: never share it with anyone.
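As a concrete illustration of the access rules above, on Linux the key would typically be locked down like this (a sketch with hypothetical paths; adjust the owner if the web server user reads the key directly):

# Private key: owner read/write only, no access for group or others
chmod 600 /etc/ssl/private/server.key
chown root:root /etc/ssl/private/server.key
# Certificate: public, world-readable is fine
chmod 644 /etc/ssl/certs/site_name.crt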

Version Control a file with per machine dependent stuff

I have a config file in my project that includes some info that is machine-dependent (db username, password, path). I understand that in this particular case I could force everybody to use the same username, db path, and password to keep things simple, but there must be a better way to deal with this problem.
I use mercurial, if you care, but I am ok with just a theoretical answer if you are unfamiliar with hg specifics.
A common way to handle this is to put a config.example or similar under version control and force the user to copy it and make any necessary changes. That way the user can pull down the overall structure of the file from your repository without overwriting local changes.
Alternatively, you could make your config file provide only defaults, with the option to source a subset of variables from a higher-priority custom config file (in the same format) which the user may or may not provide.
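A minimal sketch of that defaults-plus-override idea (the file names are hypothetical, and the same pattern works in any language or config format):

import json
import os

def load_config():
    # Version-controlled defaults shared by everyone
    with open("config/defaults.json") as f:
        config = json.load(f)
    # Per-machine overrides, listed in .hgignore so they are never committed
    local_path = "config/local.json"
    if os.path.exists(local_path):
        with open(local_path) as f:
            config.update(json.load(f))
    return config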
You'll want to use the .hgignore file to not include the config file in the repository.
This will allow everyone to have their own version of the config file.
Basically, you just add the relative path to the config file, and Mercurial commands will ignore it. So the file would look like this:
config/dbconfig.ext
Edit
I just realized you still want to be able to version control the config file (I misunderstood the question). So I suggest moving the machine-dependent parts of the config file into their own file and then applying the fix above. That way you can still keep the regular config information under version control, while the per-machine part stays separate on each person's machine.
I have per machine databases for my PHP projects. What I do is check the hostname at runtime. If it is one host, I feed it certain credentials. If another, feed it different credentials.
On some systems I create a list of credentials and then just go down the line trying them until one of the connections works. If the list is exhausted, the connection cannot be made.
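A sketch of the hostname idea in Python (the answer describes PHP, but the pattern is language-agnostic; the hostnames and credentials here are made up):

import socket

# Hypothetical per-machine credentials; keep this mapping out of version control
CREDENTIALS_BY_HOST = {
    "dev-laptop": {"user": "dev", "password": "devpass"},
    "prod-01": {"user": "app", "password": "s3cret"},
}

def credentials_for_this_machine():
    return CREDENTIALS_BY_HOST[socket.gethostname()]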
I've never found a solid method for handling this type of configuration file. My final solution was to maintain a version of each file and use symbolic links. That way each server has the same file path, but a different underlying file.
Without knowing exactly what is in your config file, I'm going to assume it has some stuff that is machine-dependent (e.g., db password, paths) and other stuff that is not (db hostname, maybe some paths relative to a per-machine base path, etc.).
If that's the case, what you want to do is refactor your config file so that you have two config files: one for the common stuff, one for the machine-specific stuff. Check the common one in, and add the machine-specific one to the ignore file.