I'm looking for a data-at-rest encryption solution for our setup.
Our application runs on our clients' machines, on-premise, set up by their IT staff (who, however, don't hold any credentials). We log in via SSH. The machines are meant to stay up. We're storing sensitive information and need to encrypt it to meet data-at-rest requirements. We're using Ubuntu 18.04+ and PostgreSQL.
I've looked into different solutions and gathered some information from several related previously asked questions:
Full disk encryption - since their IT is not really available to us, going in that direction might be problematic, as it would require performing more steps on their side. Also, if (when) the server ever gets rebooted, we would need to log in via SSH to enter the passphrase, or use some kind of network-bound encryption, which again requires additional setup, and the extra resources might not even be available to us.
File-based encryption - use something like eCryptfs and store the PostgreSQL data directory in an encrypted file system. This is currently the only solution I've found that solves most of the issues; however, there might be other directories that would need encryption (like /tmp), and I'm not certain they can be encrypted with this method. Once rebooted, the file system wouldn't be mounted automatically, and we would need to mount it manually; I don't see how we can solve this without, again, network-bound encryption. eCryptfs also lets the user enter whatever configuration and passphrase they want on every mount, even if they don't match the previously used settings, which I think makes file corruption likely. Writing a program that intercepts the mounting and validates the passphrase might be a possible solution. Handling problems like hanging processes when the FS isn't mounted is also okay for now, but overall this solution doesn't scale nicely. (A rough sketch of what this would look like is below.)
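For reference, my understanding is that this option boils down to something like the following (a rough sketch; the path assumes Ubuntu's default PostgreSQL location, and the passphrase handling is exactly the open question):

## Rough sketch, assuming ecryptfs-utils is installed and the cluster lives
## under Ubuntu's default /var/lib/postgresql.
sudo systemctl stop postgresql
## Interactive mount over the data directory: prompts for the passphrase,
## cipher, and filename-encryption options. This is the step that would have
## to be repeated (and somehow validated) after every reboot.
sudo mount -t ecryptfs /var/lib/postgresql /var/lib/postgresql
sudo systemctl start postgresql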
Column-based encryption, client-side encryption, etc. - these don't work for our setup. We want to be able to query the data over SSH, and the client is stored on the same machine as the data, so using something like PGP keys would mean the data is effectively unencrypted.
We don't use cloud services of any sort.
Maybe we need a different setup, or there are other solutions I'm not aware of. I'm really new to this subject and to the Stack Overflow community. The solutions I've found on the internet seem sparse and dated, and I'm not sure they're still relevant.
Related
I am using PowerShell to manage Autodesk installs, many of which depend on .NET, and some of which install services that they then try to start; if the required .NET isn't available, the install stalls with a dialog that requires user action, despite the fact that the install was run silently. Because Autodesk are morons.
That said, I CAN install .NET 4.8 with PowerShell, but because PowerShell is dependent on .NET, that will complete with exit code 3010, Reboot Required.
So that leaves me with the option of either managing .NET separately, or triggering that reboot and continuing the Autodesk installs in a state that will actually succeed.
The former has always been a viable option in office environments, where I can use Group Policy or SCCM or the like, then use my tool for the Autodesk stuff that is not well handled by other approaches. But that falls apart when you need to support the Work From Home scenario, which is becoming a major part of AEC practice. Not to mention the fact that many/most even large AEC firms don't have internal GP or SCCM expertise, and more and more firm management is choosing to outsource IT support, all too often to low-cost glorified help desk outfits with even less GP/SCCM knowledge. So, I am looking for a solution that fits these criteria.
1: Needs to be secure.
2: Needs to support access to network resources where the install assets are located, which have limited permissions and thus require credentials to access.
3: Needs to support remote initiation of some sort, PowerShell remote jobs, PowerShell remoting to create a scheduled task, etc.
I know you can trigger a script to run at boot in System context, but my understanding is that because the System context isn't an actual user, you don't have access to network resources in that case. And that would only really be viable if I could easily change the logon screen to make it VERY clear to users that installs are underway and that they should not log on until the installs are complete and the logon screen is back to normal. Which I think is really not easily doable, because Microsoft makes it near impossible to make temporary changes/messaging on the logon screen.
I also know I can do a one time request for credentials on the machine, and save those credentials as a secure file. From then on I can access those credentials so long as I am logged in as the same user. But that then suggests rebooting with automatic logon as a specific user. And so far as I can tell, doing that requires a clear text password in the registry. Once I have credentials as a secure file, is there any way to trigger a reboot and one time automatic logon using those secure credentials? Or is any automatic reboot and logon always a less than secure option?
EDIT: I did just find this that seems to suggest a way to use HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon without using a plain text DefaultPassword. The challenge is figuring out how to do this in PowerShell when you don't know C#. Hopefully someone can verify this is a viable approach before I invest too much time in trying to implement it for testing. :)
And, on a related note, everything I have read about remote PowerShell jobs and the Second Hop Problem suggests the only "real" solution is to use CredSSP, which is itself innately insecure. But it is also a lot of old information, predating Windows 10 for the most part, and I wonder if that is STILL true? Or perhaps was never true, since none of the authors claiming CredSSP to be insecure explained in detail WHY it was insecure, which is to me a red flag that maybe someone is just complaining to get views.
I have been struggling to find a good architecture, or even any nomenclature for what I'm trying to do here. I'm looking for nomenclature so I can have a starting point for research. And I want the same for architecture, but I'll take whatever anyone wants to help with.
What I'm trying to do & learn about
In a nutshell, I need my clients to exchange pub keys and other security data such as ACL IDs, names, etc.
Current architectural attempts
I'm currently using my server as a via point, mainly because I can't see any other way of doing this securely, and this method uses many layers of security. I also don't know of any other method of going from client app to client app securely.
A client creates a group and sends its pub key to the server, then opens a live query to receive the other users' data. The other user (who has been passed the necessary secrets) queries the server for the pub key, then sends their own data to the admin user via the server. The admin then sends the remainder of their own data. I'm leaving out trivial security details, but this is the gist of what I'm doing.
Issues
This is really just logical back and forth, but I honestly don't know what I'm doing. I don't even know if what I'm doing is right or the best way, and I've also got a crazy infinite loop I'm trying to solve.
I'm looking for some terminology, description and/or architectural pointers, I'll take any input I can get.
Forget terminology, nomenclature and architecture.
Define the problem you are trying to solve in a simple sentence.
Break down the issues into smaller pieces (bite size).
You send A data to the server.
What happens to the A data?
Any feedback or acknowledgement from the target host?
What sort of application is this? Web, mobile, traditional client/server?
The most elegant solutions are usually the simplest ones.
Sit down and determine whether you have a problem to solve in the first place.
I'm considering getting a web server in China to reduce site loading times for users in China. The problem is, how do I sync/keep the same data between the two sites? When content is edited on the main site, the changes should be pushed to the site on the China server.
Server is running Linux, Apache and MySQL. Website is using WordPress.
FYI, I'm already using a CDN and the site loading time is still too long from China.
Basically your solution would need to...
Copy the entire contents of your http'd directory from the main server to the Chinese server.
Copy the entire contents of your MySQL database from the main server to the Chinese server.
Perform these tasks at a regular interval without manual intervention.
I can guide you to references that will help with each task and sometimes can show you a quick example. However, if you want to get it to work and especially if you want to optimize the process, you're going to have to look through the references yourself.
If I didn't do it this way this answer would get even more horrendously long than it already is.
Before we start you should remember...
Thing 0 - Please Try Not to be Intimidated by the Length of this Answer
I know I've written a lot, perhaps more than I should have, but I guarantee you are capable of implementing this in no more than a day. I have tried to be thorough but that does not mean that what I'm describing is particularly complicated.
Thing 1 - Shutdown your Chinese Server During Transfer
This transfer of data is going to make your Chinese server unusable while it's in progress, as you might have guessed. You need to make sure that your Chinese server is not operational during the transfer. Otherwise the server might have only partial data available, which could cause problems for both client and server, particularly in relation to MySQL.
Thing 2 - Use Compression as much as You Can
As time consuming as compression and decompression can be for large amounts of data, believe me it is nothing compared to the time you will waste sending the uncompressed data to China. Network usage, not processor time, is really going to be the limiting factor in getting the transfer done quickly. Try to send compressed files whenever possible.
Thing 3 - Try to Use Checksums
Sending all your data, particularly in compressed format, will leave it vulnerable to corruption in transit. Whenever you send a file I encourage you to use some kind of checksum on the data to verify that it has not been corrupted. For brevity I will not be showing you how to do this but I'm sure you're smart enough to figure out how to pepper in some verification.
In case you're not familiar with checksums, the Wikipedia article about them is pretty straightforward. The most commonly used are MD5 and SHA-1, but both of those are somewhat collision-prone. I would recommend SHA-2 (also called SHA-256/512) or the newer SHA-3.
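To make that concrete, a quick sketch with sha256sum (the file name just follows the zip example later in this answer):

## On the main server, right after creating the archive:
sha256sum copy.zip > copy.zip.sha256
## Send copy.zip.sha256 along with copy.zip, then on the Chinese server:
sha256sum -c copy.zip.sha256   ## prints "copy.zip: OK" if nothing was corrupted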
Copying your Http'd Directory to the Chinese Server
As far as I know (and I could be wrong) there is no built in way to transfer files from one Apache server to another...so you're going to have to write your own script for this.
You're also going to need to have two separate scripts: one for the main server and one for the Chinese server. Here's a breakdown of what each script needs to do.
On your main server...
Log in as your Apache server's user. (Reference for switching users.)
zip/gzip/tar.gz your http'd directory's contents. (Reference for zip. Reference for gzip. Reference for tar.)
scp (secure copy) the compressed file to your Chinese server. Make sure to copy it to the username that Apache runs under. (Reference for scp.)
Delete the compressed file.
Initiate the Chinese server's script (this will be discussed later).
You will likely be using a shell script for all of this, so I hope you're familiar with the terminal. A simple example would look like this.
#!/bin/sh
## First I'll define some variables to explain this better.
APACHE_USER="whatever your Apache server's username is (usually it's www-data)"
WWW_DIR="your http'd directory (usually it's /var/www)"
CHINA_HOST="the host name/IP address of your Chinese server"
CHINA_USER="Apache's username on the Chinese server"
CHINA_PWD="Apache's user password on the Chinese server"
CHINA_HOME="the home directory of the Apache user on your Chinese server"
## Now to the real scripting. I will be using zip for compression.
## Note: don't call "su" inside the script; it starts a new shell and the
## commands after it would not run as that user. Run the whole script as
## the Apache user instead, e.g. su - "$APACHE_USER" -c /path/to/this/script.sh
zip -r copy.zip "$WWW_DIR"
## scp cannot read a password from stdin; either set up SSH key authentication
## (preferred, and then drop CHINA_PWD) or install sshpass as shown here.
sshpass -p "$CHINA_PWD" scp copy.zip "$CHINA_USER@$CHINA_HOST:$CHINA_HOME"
rm copy.zip
## Then you initiate the next step of the process.
## Like I said this will be covered later.
On your Chinese server...
Log in as the Apache user.
Delete the contents of the http'd directory (probably /var/www).
Decompress the scp'd file (this will change depending on how you compressed it).
Copy the decompressed directory to the http'd directory (this step is unnecessary if you choose to compress with zip).
Delete the compressed, scp'd file.
Notify main server to continue next step (again, will be discussed later).
This is pretty straightforward, but here's a minimal sketch anyway.
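The sketch assumes zip was used against /var/www (as in the script above), that the archive landed in the Apache user's home directory as ~/copy.zip, and that the script runs as that user:

#!/bin/sh
## Rough sketch for the Chinese server, run as the Apache user.
WWW_DIR="/var/www"

rm -rf "${WWW_DIR:?}"/*
## zip stores paths without the leading slash (var/www/...), so extracting
## at / puts the files straight back in place.
unzip -o ~/copy.zip -d /
rm ~/copy.zip
## Then notify the main server to continue (covered later).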
Copying the MySQL Database Contents
You can find a good reference for how to do this in this article from the MySQL website. Basically copying database contents is a built in feature. Try to make use of the compression options!
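If you go the dump-and-reload route, a minimal sketch with mysqldump might look like this (the database name, credentials, and host are all placeholders, and SSH key authentication is assumed as before):

#!/bin/sh
## Minimal sketch: dump, compress, copy, and reload the WordPress database.
DB_NAME="wordpress"                       ## placeholder database name
DB_USER="your MySQL user"
DB_PWD="your MySQL password"
CHINA="apache-user@your-chinese-server"   ## placeholder SSH destination

## Dump and compress on the main server (compression, as promised).
mysqldump -u "$DB_USER" -p"$DB_PWD" --single-transaction "$DB_NAME" | gzip > dump.sql.gz

## Copy it over, load it into MySQL on the Chinese server, then clean up.
scp dump.sql.gz "$CHINA:~/"
ssh "$CHINA" "gunzip -c ~/dump.sql.gz | mysql -u '$DB_USER' -p'$DB_PWD' '$DB_NAME' && rm ~/dump.sql.gz"
rm dump.sql.gz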
Performing these Tasks at Regular Intervals without Manual Intervention
Ok this is where things get kind of complicated.
The first thing you need to know is how to schedule tasks at regular intervals on Linux. This is done with a command line tool called crontab. You can see good examples for setting up cron jobs in this article, and the full crontab documentation here.
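As a concrete sketch, a crontab entry on the main server could look like this (the script path is just a placeholder; edit your crontab with crontab -e):

## m h dom mon dow  command -- run the sync script every night at 03:00
0 3 * * * /home/www-data/sync_to_china.sh >> /home/www-data/sync.log 2>&1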
However what will take more skill than just scheduling the job at regular intervals will be synchronizing the data transfer. If you simply set one server to send data at a certain time and the other to receive it at a certain time, you will get many bugs. Be sure of that.
My recommendation would be to create a socket in the Chinese server that listens for instructions from the main server.
This can be done in a variety of languages. Because you're using Linux I would recommend doing this in C, but it can be done in almost any language including Bash.
A full example would be too much, but basically this will be the flow of what you have to do (a minimal listener sketch follows the list).
Socket in China listens for connections.
Cron job in main server connects to China socket.
Main server authenticates itself.
Chinese server stops Apache, stops accepting requests.
Chinese server acknowledges authentication approved.
Main server scp's website contents to Chinese server.
Main server tells Chinese server that scp is complete.
Chinese server replaces Apache's http'd directory's contents with the data that has been scp'd.
Chinese server announces success to main server.
Main server copies MySQL data.
Main server tells Chinese server process is complete.
Chinese server resumes Apache service.
Chinese server notifies main server that service is resumed.
Socket is closed.
Chinese server goes back to listening for connection from main server.
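A minimal sketch of that listener in shell, assuming the traditional netcat (package netcat-traditional on Debian/Ubuntu), a shared secret agreed on out of band, and Apache managed via "service apache2". The MySQL load step and the acknowledgements back to the main server are left out for brevity.

#!/bin/sh
## Minimal listener sketch for the Chinese server; run it as root.
SECRET="replace-with-a-long-random-string"   ## placeholder shared secret
PORT=9999                                    ## placeholder control port
ARCHIVE="/home/www-data/copy.zip"            ## wherever the archive was scp'd

while true; do
    ## Wait for one message from the main server, sent e.g. with:
    ##   echo "$SECRET BEGIN" | nc -q 1 your-chinese-server 9999
    MSG=$(nc -l -p "$PORT")
    case "$MSG" in
        "$SECRET BEGIN")
            ## Transfer is about to start: stop serving requests.
            service apache2 stop
            ;;
        "$SECRET DONE")
            ## Transfer finished: swap in the new content, resume service.
            rm -rf /var/www/*
            unzip -o "$ARCHIVE" -d /   ## zip stored paths as var/www/...
            rm "$ARCHIVE"
            service apache2 start
            ;;
    esac
done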
I hope this helps!
I'm developing an app that's pretty simple, and the important part of it is the content, which consists of lots of info that has been gathered over many years. I want to format it in a nice way to show to the user.
When the user downloads the app and first loads it, it goes to the server to get the whole database onto the phone. Then he can see the important items and sort/filter through them. To avoid somebody taking my database, I'll use an SSL connection. I know that if they want they could use the app to see every piece of content one by one, but there's nothing to do about that.
The thing is: I have the data in the cloud (mine). I can securely download it using an SSL connection (any other ideas to secure the transfer?). When I get it here, I'll save it in a db (Core Data is the obvious choice).
How can I secure the data in the internal database, so that if the app is hacked, someone cannot access the db? I would put it in the keychain, but it's a rather large db for that and it's not that important. (It's not sensitive info, just info I don't want anybody to harvest in bulk.)
The other thing I could do is never store anything on the device and have the user always make calls to the cloud, but I think this would be too time consuming. Or I could just give him the option to save his favorite picks to the device, but that's also time consuming and there is the sync issue.
This is a reference I looked up about a similar issue, without the part I'm asking answered:
How to encrypt iPhone upload and download of info?
Basically, the only choice is to use SqlCipher. Of course, you have to port it to iPhone yourself (unless someone else has posted a port since last I looked). But it's not an insurmountable task.
Of course, even with SqlCipher you have the challenge of storing the key somehow. There's no really secure way to do this -- you have to use some form of "security by obscurity".
Why not just have some private key info stored in the code, and then when you want to download the database just have it query the server with the key? That way you won't need to worry about SSL or encryption for the download. As for storing it, I agree with Hot Licks: SqlCipher appears to be the best and only option. However, watch out for encryption, as you will have to declare it to Apple and get all kinds of export permits (http://stackoverflow.com/questions/2135081/does-my-application-contain-encryption).
Hope this helps,
Jonathan
Our shop has developed a few WEB/SMS/DB solutions for a dozen client installations. The applications have some real-time performance requirements, and are just good enough to function properly. The problem is that the clients (owners of the production servers) are using the same server/database for customizations that are causing problems with the performance of the applications we created and deployed.
A few examples of clients' customizations:
Adding large tables with many text datatypes for the columns that get cast to other data types in the queries
No primary keys, indexes, or FK constraints
Use of external scripts that use count(*) from table where id = x, in a loop from the script, to determine how to construct more queries later in the same script. (no bulk actions that the planner can optimize or just do everything in a single pass)
All new code files on the server are created/owned by root, with 0777 permissions
The clients don't take suggestions/criticism well. If we just go ahead and try to port/change the scripts ourselves, the old code can come back, clobbering any changes that we make! Or, with our limited knowledge of their use cases, we break functionality while trying to optimize their changes.
My question is this: how can we limit the resources available to queries/applications other than what we create and deploy? Are there any pragmatic options in scenarios like this? We prided ourselves on having an OSS solution, but it seems that it's become a liability.
We use PG 8.3 running on a range of Linux distros. The clients prefer PHP, but shell scripts, Perl, Python, and PL/pgSQL are all used on the system in one form or another.
This problem started about two minutes after the first client was given full access to the first computer, and it hasn't gone away since. Anytime someone whose priority is getting business-oriented work done quickly gets involved, they will be sloppy about it and screw things up for everyone. That's just how things work, because proper design and implementation are harder than cheap hacks. You're not going to solve this problem; all you can do is figure out how to make it easier for the client to work with you than against you. If you do it right, it will look like excellent service rather than nagging.
First off, the database side. There's no way to control query resources in PostgreSQL. The main difficulty is that tools like "nice" control CPU usage, but if the database doesn't fit in RAM it may very well be I/O usage that is killing you. See this developer message summarizing the issues here.
Now, if in fact it's CPU the clients are burning through, you can use two techniques to improve that situation:
Install a C function that changes the process priority (example 1, example 2) and make sure whenever they run something it gets called first (maybe put it into their psql config file, there are other ways).
Write a script that looks for postmaster processes spawned by their userid and renices them; make it run often in cron or as a daemon (a rough sketch follows).
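A rough sketch of that second approach, assuming the client connects as a database role named clientuser (adjust to the real one) and that the script runs as root from cron:

#!/bin/sh
## Lower the priority of PostgreSQL backends serving the client's role.
## Assumes the backends show the connected role in their process title
## (update_process_title, which is on by default).
CLIENT_ROLE="clientuser"   ## placeholder role name

for pid in $(pgrep -f "postgres:.*$CLIENT_ROLE"); do
    renice +10 -p "$pid"
done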
It sounds like your problem isn't the particular query processes they're running, but rather other modifications they're making to the larger structure. There's only one way to cope with that: you have to treat the client like they're an intruder and use the approaches of that portion of the computer security field to detect when they screw things up. Seriously! Install an intrusion detection system like Tripwire on the server (there are better tools, that's just the classic example), and have it alert you when they touch anything. New file that's 0777? Should jump right out of a proper IDS report.
On the database side, you can't usefully detect modifications to the database in a direct way. What you should do is dump the schema every day into a file (pg_dumpall -g and pg_dump -s), then diff that against the last one you delivered and have it alert you again when something has changed. If you manage this well, the contact with the client turns into "we noticed you changed something on the server...what is it you're trying to accomplish with that?", which makes you look like you're really paying attention to them. That can turn into a sales opportunity, and they may stop fiddling with things as much just knowing you're going to catch it immediately.
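A sketch of that daily snapshot-and-diff, assuming the database is called appdb, the script runs as the postgres user, and the diff gets mailed to you (names and recipient are placeholders):

#!/bin/sh
## Dump globals plus the schema, diff against the last snapshot, and mail
## the difference if anything changed.
SNAP_DIR="$HOME/schema-watch"
TODAY="$SNAP_DIR/schema-$(date +%F).sql"
LAST="$SNAP_DIR/schema-last.sql"

mkdir -p "$SNAP_DIR"
{ pg_dumpall -g; pg_dump -s appdb; } > "$TODAY"

## diff exits non-zero when the files differ, which is when we want the mail.
if [ -f "$LAST" ] && ! diff -u "$LAST" "$TODAY" > /tmp/schema.diff; then
    mail -s "Schema change on $(hostname)" you@example.com < /tmp/schema.diff
fi
cp "$TODAY" "$LAST"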
The other thing you should start doing immediately is install as much version control software as you can on each client box. You should be able to login to each system, run the appropriate status/diff tool for the install, and see what's changed. Get that mailed to you regularly too. Again, this works best if combined with something that dumps the schema as a component to what it manages. Not enough people use serious version control approaches on the code that lives in the database.
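As a sketch of that "what changed?" check, assuming the deployed code lives in a git working copy (the path and recipient are placeholders), a daily cron job could be as simple as:

#!/bin/sh
## Mail yourself a list of any modified or untracked files in the deployment.
cd /var/www/app || exit 1
if [ -n "$(git status --porcelain)" ]; then
    git status --porcelain | mail -s "Code changed on $(hostname)" you@example.com
fi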
That's the main set of technical approaches useful here. The rest of what you've got is a classic consulting client management problem that's far more of a people problem than a computer one. Cheer up, it could be worse--FSM help you if you give them ODBC access and they discover they can write their own queries in Access or something simple like that.