Scenario:
As a development team member, I create some .dll and .ps1 files.
I'm trusted to produce correct code.
I'm required to sign those files, but the security team is in charge of the signing certificates.
I could just send them all my binaries and wait for them to come back signed...
... but that produces quite a huge .zip archive ...
So I'm wondering whether there is a way to save us all this sending of binaries back and forth.
Question:
Is it possible for me to generate SHA-1 / SHA-256 / MD5 / whatever hashes from all my files and send those to the security team, so that they generate signatures, send them back, and I attach them to the original files?
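A minimal PowerShell sketch of the hashing step, assuming the security team could work from such a manifest (the build folder, file types, and output path are placeholders):

# Build a SHA-256 manifest of every .dll and .ps1 under .\build (placeholder path)
Get-ChildItem -Path .\build -Recurse -Include *.dll, *.ps1 |
    Get-FileHash -Algorithm SHA256 |
    Select-Object Path, Hash |
    Export-Csv -Path .\hash-manifest.csv -NoTypeInformation

Whether the security team can turn those hashes into signatures that can later be attached to the files depends on the signature format they use; that part is exactly the open question here.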
We write passwords in the key.properties file for signing a Flutter APK. Isn't that dangerous? How can we make it secure against debugging and reverse engineering?
storePassword=
keyPassword=
keyAlias=
storeFile=
This is already a secure mechanism that is followed by Android.
In Brief:
Creating a keystore file is quite similar to storing configs in environment variables. By default, if you generate or sign an app using Android Studio, it stores the credentials directly in the Gradle file; instead, when working in teams, we store them in a separate file which is not included in the build and can also be excluded from source control using .gitignore.
So we use these keystore variables while signing the app instead of hard-coded strings, as in the sketch below.
Another reason is that the .jks file, which is indeed really important, exists only on your PC, and without it you cannot compile the app.
There are practices you can follow to ensure security, like using ProGuard and code obfuscation. Flutter is still at a growing stage, so it will likely make more use of the NDK, with which one can compile code natively into .so files, which are much less likely to be decompiled than APKs.
To keep the file private, add it to the .gitignore file:
**/android/key.properties
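For reference, here is a sketch of how those key.properties values are typically consumed in android/app/build.gradle under the standard Flutter signing setup (file names follow the usual Flutter template):

def keystoreProperties = new Properties()
def keystorePropertiesFile = rootProject.file('key.properties')
if (keystorePropertiesFile.exists()) {
    keystoreProperties.load(new FileInputStream(keystorePropertiesFile))
}

android {
    signingConfigs {
        release {
            // values come from key.properties instead of hard-coded strings
            keyAlias keystoreProperties['keyAlias']
            keyPassword keystoreProperties['keyPassword']
            storeFile file(keystoreProperties['storeFile'])
            storePassword keystoreProperties['storePassword']
        }
    }
    buildTypes {
        release {
            signingConfig signingConfigs.release
        }
    }
}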
I need to create a script in PowerShell that validates the hash of a file located on a web site and, if there are any changes in the file, starts downloading the file.
Is there a way to validate the hash of the file without previously downloading the file to the local machine?
You need to have the contents of a file to create a hash for it, so you always have to download the file first. This is easy in PowerShell: download the file and create the MD5 hash (checksum).
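For example, a sketch of that download-then-hash step (the URL and destination path are placeholders):

# Download the file, then compute its MD5 checksum locally
$url  = "https://example.com/file.bin"
$dest = "$env:TEMP\file.bin"
Invoke-WebRequest -Uri $url -OutFile $dest
$hash = (Get-FileHash -Path $dest -Algorithm MD5).Hash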
If you own the web site, create the MD5 hash on the server and just download that hash and compare it to a locally stored hash; if they differ, download the whole file.
I don't think there is a way to do this without downloading the file. An MD5 checksum is generated from the file's contents, so in order to generate it with PowerShell you need the contents. Ergo, you need to download the file.
I would advise generating the checksum directly on the web server via PHP or whatever language you use there. You could save the checksums in a separate metadata file or append them to the original file's name. Then you can compare the checksum without downloading the full file.
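A sketch of that approach, assuming the server publishes an MD5 checksum in a metadata file next to the original (all URLs and paths here are placeholders):

# Compare the server-published checksum with the hash of the last download
$fileUrl = "https://example.com/file.bin"
$hashUrl = "$fileUrl.md5"
$dest    = "$env:TEMP\file.bin"
$remoteHash = (Invoke-WebRequest -Uri $hashUrl).Content.Trim()
$localHash  = if (Test-Path $dest) { (Get-FileHash -Path $dest -Algorithm MD5).Hash } else { "" }
if ($remoteHash -ne $localHash) {
    # File is new or changed: download the full file
    Invoke-WebRequest -Uri $fileUrl -OutFile $dest
}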
Background: I have a .NET application which must be deployed and auto-configured to work in multiple third-party environments. Currently, it gets deployed by posting a customer-compiled MSI to the intranet. The reason the MSI needs to be customer-compiled is to specify deployment parameters, such as internal web service URLs to connect to.
Problem statement: both MSI and ClickOnce deployment installations must be signed; otherwise a security popup shows. I have a signing key but cannot sign them, since there is no information about which customer environment they will be used in. The customer has information about the environment, so he can deploy as ClickOnce or build the MSI, but cannot sign them because he has no key.
Question: Is it possible to launch a pre-built MSI, executable, or ClickOnce application from a web page while supplying it parameter(s), such as the URL? Alternatively, is it possible to generate the deployment package such that it can determine from which URL it was downloaded (so that this is used to discover the environment)?
Example Solution: One way of addressing the problem is renaming the file itself, for example renaming mysetup.msi to aHR0cDovL215aG9zdC5sb2NhbC9jb25maWcueG1s.msi. This does not break the digital signature because the file itself is not altered. But the name of the MSI is accessible to custom actions, so an action could convert the file name back to text and learn that it should read its configuration from http://myhost.local/config.xml. That would work, but is pretty ugly; I'm looking for a more elegant solution.
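For what it's worth, the file name in that example is just the Base64 encoding of the URL, so a custom action's decoding step would look roughly like this in PowerShell:

# Decode the MSI file name (without extension) back into the configuration URL
$name = "aHR0cDovL215aG9zdC5sb2NhbC9jb25maWcueG1s"
$url  = [System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($name))
# $url is now "http://myhost.local/config.xml"

Note that Base64 output can contain characters such as '/' that are not valid in file names, so a file-name-safe variant of the encoding would be needed in practice.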
I have some difficulty following the reason for the "customer-compiled" MSI strategy. Quite an uncommon thing, this :-)
Yes: if you mean web deploy, where the user shall double-click on the MSI, you cannot pass parameters.
Solution 1: Of course you could ask the user in setup dialogs to fill in the data. There was an earlier answer covering this before mine.
Solution 2: Hardcode the dependencies. Ask all your customers for their web URLs, try to find a domain name, a special registry key, or an environment variable on your customers' machines, and put this logic into the MSI. The MSI will then be started in that special environment and can find the correct data. You can even ping different URLs or IPs to verify the environment. Takes some time, but...
Not very beautiful, but you could get money every time they want to change something :-)
Solution 3: Prepared machines at the customer
Tell the customer: OK, custom compiling is not a good solution; I have to deliver one signed setup, so please prepare all your machines via Group Policy with defined registry value(s) in which all your specific URL data is captured. These are read out either by the setup or by the application itself. Or let him put a custom .config file at a specific place on his own.
Solution 4: Two-step-deployment
Deploy the .config file, with predefined custom URLs or other configs, separately from your MSI. In your MSI you only check whether it already exists in the same path. If you choose an .ini file format (instead of .xml), MSI can read it into MSI properties with standard methods; .xml is supported by tools like InstallShield and others.
Solution 5: Somewhat similar: don't handle configuration in the setup at all. Install without the URL/config information, and the first time the app starts, ask the user to provide the data or give a path to a config file containing the information.
Solution 6: If the other solutions are not for your case, let the customer sign the MSI with their own certificate. Make a batch script to help him, as sketched below. If the company is not capable of buying its own certificate, buy a separate certificate for each customer and include the price in your price for the product and the support :-)
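A minimal sketch of such a helper, built around signtool's standard options (the certificate file, password, and timestamp URL are placeholders the customer would fill in):

signtool sign /f customer-cert.pfx /p <pfx-password> /fd SHA256 /tr http://timestamp.example.com /td SHA256 MySetup.msi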
I think 4 and 5 are my favourites, actually.
An MSI supports passing parameters to it using public properties. The limitation that you have comes from the VS setup project, i.e. I don't know if it has the support to help you configure the package to accept the parameters.
The following tutorial about dialog editing, made with Advanced Installer, shows what a standard MSI can do. A more advanced example is importing an XML file, like a web.config, into your setup and configuring it to be updated at install time with connection parameters entered by the user during the install.
Of course, all of these parameters can also be passed on the command line during a silent installation; it goes something like this: msiexec /i [msi-path] /qn MY_URL="http://www.example.com" USERNAME="John Doe"
Basically, any column of an MSI table that accepts formatted data can be used to refer to properties set by the user at install time, from the command line or the installer UI. The only limitation comes from the tool that you are using to build the setup package.
When I started my current job, I was told to install the Subversive plugin for Eclipse, and given the URL of the repository to pull projects down from. My username and password were/are the same as my Active Directory credentials. So I installed the plugin, created a new repository (don't remember how, but it was easy to do), and have never looked back.
I am now being transitioned to a different team, who also use SVN for source control, but have it set up on a completely different server. I was asked to put in a ticket with the systems people to request access to this SVN server so I could access this other team's code.
The systems person assigned to my ticket just sent me the following email:
Attached are the pkcs12 files that are needed for your access to SVN on [svn.someserver.com]. You’ll need to put these files on your local systems and then add the following configuration to the ~/.subversion/servers file, for your SVN client. I just use the svn command on linux, so my home directory contains the .subversion directory and the servers file is in that directory. I will send your password separately.
Note: I have a Windows machine, so a part of my confusion may stem from the fact that the tech is on Linux and I am on Windows 7.
The attachment was a ZIP file that extracted two separate files:
foo.pem - a PEM file (?)
atannon - a "Personal Information Exchange" file (?); same as my username
The tech followed up with an email giving me my password in cleartext.
I checked my home directory and do not see a .subversion or .svn hidden directory anywhere. I am wondering if I need to follow his directions, but using my Program Files/eclipse/ directory instead.
So I have several questions here, all relating to how to configure SVN access in the manner prescribed by this systems tech:
Why was it so easy for me to get set up with the first SVN server when I started my job (just install the plugin and find the repo through Eclipse's Repo Explorer), and why does this server require so much configuration? I assume there are multiple methods for gaining access to a SVN server, and this 2nd team just uses a more lengthy setup method?
Can someone give me a super-quick rundown of what each of these files are and what purpose they serve? And why I need to install them locally on my system?
Where should I install these files? The tech wanted me to put them in my ~/.subversion directory, but I never created one because the only SVN client I ever installed was Subversive (through Eclipse)
I tried creating a new repository for [svn.someserver.com] in Eclipse. I supplied my username and the cleartext password the tech sent me and now it is giving me a dialog stating I need to "Provide authentication information", asking for SSL settings, and specifically a File and a Passphrase for the Client Certificate...would the files he sent me suffice for this? If so, perhaps the answer to my question above just requires knowing which files to point Eclipse to, and I don't have to install these files anywhere
I usually don't like to ask multiple questions inside of one giant question, but these are all so similar in nature that I didn't want to clutter SO with too many closely-related questions.
Thanks in advance for any help here!
Why was it so easy for me to get set up with the first SVN server when I started my job (just install the plugin and find the repo through Eclipse's Repo Explorer), and why does this server require so much configuration?
The first server has less paranoid security settings (if it has any at all); the second was configured by a Real Admin. Client-certificate authorization is the most bullet-proof method.
Can someone give me a super-quick rundown of what each of these files are and what purpose they serve? And why I need to install them locally on my system?
foo.pem is your personal S/MIME certificate, which is used for client authentication; you store it locally and link it with the repo's server. atannon (I think) contains the password for the certificate's private key, which will be asked for (TBT) at the first operation with the repo (or at every operation, if you don't cache the password).
Where should I install these files? The tech wanted me to put them in my ~/.subversion directory
For Windows, the $HOME dir (~ in the Tux world) is C:\Users\<Your Username>\ (Win7) or C:\Documents and Settings\<Your Username>\ (WinXP). You have to find the servers file inside this tree (and remember its location for the future). In the case of my XP (with TortoiseSVN only, no Eclipse):
Directory of c:\Documents and Settings\Badger\Application Data\Subversion
30.06.2010 09:02 <DIR> auth
02.01.2012 19:11 6 712 config
30.06.2010 09:02 4 400 README.txt
30.06.2010 09:02 7 832 servers
"Provide authentication information", asking for SSL settings, and specifically a File and a Passphrase for the Client Certificate...would the files he sent me suffice for this?
Yes, the pem file is the certificate in PKCS12 format; atannon (I hope) contains the password for it.
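For reference, the relevant entries in the servers file would look something like this (the group name, certificate path, and password are placeholders; on Windows 7 the file lives under C:\Users\<Your Username>\AppData\Roaming\Subversion\servers):

[groups]
someserver = svn.someserver.com

[someserver]
ssl-client-cert-file = C:/path/to/your-client-cert.p12
ssl-client-cert-password = <the password sent separately>

Note that Subversion expects the certificate referenced by ssl-client-cert-file to be in PKCS#12 format, so point it at whichever of the two files you received is actually the PKCS#12 one.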
I'm trying to think of a good solution for automating the deployment of my .NET website to the live server via FTP.
The problem with using a simple FTP deployment tool is that FTPing the files takes some time. If I FTP directly into the website application's folder, the website has to be taken down while I wait for the files to all be transferred. What I do instead is manually FTP to a separate folder, then, once the transfer is complete, manually copy and paste the files into the real website folder.
To automate this process I am faced with a number of challenges:
I don't want to FTP all the files - I only want to FTP those files that have been modified since the last deployment. So I need a program that can manage this.
The files should be FTPed to a separate directory, then copied into the correct destination once complete.
Correct security permissions need to be retained on the directories. If a directory is copied over, I need to be sure that the permissions will be retained (this could probably be solved by rerunning a script that applies the correct permissions; see the sketch after this list).
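A sketch of such a permissions script, assuming IIS-style defaults (the site path and the account are placeholders for your environment):

icacls "D:\sites\mywebsite" /grant "IIS_IUSRS:(OI)(CI)RX" /T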
So basically I think that the tool that I'm looking for would do a FTP sync via a temporary directory.
Are there any tools that can manage these requirements in a reliable way?
I would prefer to use rsync for this purpose. But since you seem to be using a Windows OS here, some more effort is needed: Cygwin or something similar.
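For the record, a typical rsync invocation for this kind of incremental deploy, assuming SSH (or an rsync daemon) is available on the server, since rsync does not speak plain FTP (the host and paths are placeholders):

rsync -avz --delete ./site/ deploy@example.com:/var/www/staging/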