SSH (FTP) and web server concurrent file IO - Eclipse

I have a server with Apache.
I have a problem with concurrent read-write operations on one file.
Assume I have an index.html file in the Apache DocumentRoot. In a browser I can open and read it.
I'm using the Eclipse IDE to modify files directly on the server through SSH (or FTP).
After making some changes to the file I upload it to the server. The upload takes some time.
The problem is: if I try to view the file in the browser WHILE THE FILE IS UPLOADING, the upload hangs and the target file becomes blank. It looks like Apache and the SSH server are both trying to access the file, SSH to write and Apache to read, and the collision breaks everything.
Any ideas how to avoid this? Maybe some SSH server config options or Apache module?

You need to lock the file first. Do you know what operating system and Apache configuration you are using? Is it your own system?
Here is a quote from the Apache server docs:
EnableMMAP Directive
Description: Use memory-mapping to read files during delivery
Syntax: EnableMMAP On|Off
Default: EnableMMAP On
Context: server config, virtual host, directory, .htaccess
Override: FileInfo
Status: Core
Module: core
This directive controls whether the httpd may use memory-mapping if it needs to read the contents of a file during delivery. By default, when the handling of a request requires access to the data within a file -- for example, when delivering a server-parsed file using mod_include -- Apache httpd memory-maps the file if the OS supports it.
This memory-mapping sometimes yields a performance improvement. But in some environments, it is better to disable the memory-mapping to prevent operational problems:
•On some multiprocessor systems, memory-mapping can reduce the performance of the httpd.
•Deleting or truncating a file while httpd has it memory-mapped can cause httpd to crash with a segmentation fault.
For server configurations that are vulnerable to these problems, you should disable memory-mapping of delivered files by specifying:
EnableMMAP Off
For NFS mounted files, this feature may be disabled explicitly for the offending files by specifying:
EnableMMAP Off
As your server is crashing, I suspect that you have this option enabled (it is on by default) for the directory your file is in. Add
EnableMMAP Off
to the .htaccess file for your directory.
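For example, a minimal .htaccess for that directory (assuming AllowOverride FileInfo is permitted there) would be just:

    # .htaccess in the directory you edit over SSH/FTP
    # Stop httpd from memory-mapping files it serves from this directory,
    # so a file being rewritten mid-request cannot take down the worker.
    EnableMMAP Off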

Related

Firebase hosting: The remote web server hosts what may be a publicly accessible .bash_history file

We host our website on Firebase. We fail a security check for the following reason:
The remote web server hosts publicly available files whose contents may be indicative of a typical bash history. Such files may contain sensitive information that should not be disclosed to the public.
The following .bash_history files are available on the remote server:
- /.bash_history
- /cgi-bin/.bash_history
- /scripts/.bash_history
(Each finding carries the same note: the file is flagged because the scan is set to 'Paranoid'; its contents have not been inspected to see whether they contain the common Linux commands one might expect in a typical .bash_history file.)
The problem is that we don't have an easy way to get access to the hosting machine and delete these files.
Does anybody know how this can be solved?
If you are using Firebase Hosting, you should check the directory (usually public) that you are uploading via the firebase deploy command. Hosting serves only those files (plus a couple of auto-generated ones under the reserved __/ path for auto-configuration).
If you have a .bash_history, cgi-bin/.bash_history or scripts/.bash_history in that public directory, then it will be uploaded to and served by Hosting. There are no automatically served files with those names.
You can check your public directory, and update the list of files to ignore on the next deploy using the firebase.json file (see this doc). You can also download all the files that Firebase Hosting is serving for you using this script.
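For reference, the ignore list that firebase init generates already excludes dotfiles; a minimal firebase.json along those lines (the public directory name here is just the common default) looks like:

    {
      "hosting": {
        "public": "public",
        "ignore": [
          "firebase.json",
          "**/.*",
          "**/node_modules/**"
        ]
      }
    }

The "**/.*" pattern is what keeps files such as .bash_history out of a deploy, so adding it (if it is missing) and redeploying should remove them from Hosting.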

How to read a file on a remote server from openshift

I have an app (Java, Spring Boot) that runs in a container in OpenShift. The application needs to go to a third-party server to read the logs of another application. How can this be done? Can I mount the directory where the logs are stored into the container? Or do I need to use some protocol to access the file remotely and read it?
The remote server is a normal Linux server. It runs an old application packaged as a jar, which writes its logs to a local folder. The application that runs in a pod (on Linux) needs to read this file and parse it.
There are multiple ways to do this.
If continuous access is needed:
A watcher with polling events (the WatchService API)
A stream buffer
A file Observable with RxJava
In that case, creating NFS storage to expose the remote logs and mounting it as a persistent volume could be a good fit for this approach.
Otherwise, if the access is based on polling the logs at, for example, a certain time of day, then a solution consists of using an FTP client such as Apache Commons Net's FTPClient, or an SSH client with an SFTP implementation such as JSch, which is a pure-Java library (see the sketch below).
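A minimal sketch of the SFTP route with JSch, assuming password authentication and a made-up host, user, and log path (adapt all three; key-based auth and proper host-key checking are preferable in production):

    import com.jcraft.jsch.ChannelSftp;
    import com.jcraft.jsch.JSch;
    import com.jcraft.jsch.Session;

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.nio.charset.StandardCharsets;

    public class RemoteLogReader {
        public static void main(String[] args) throws Exception {
            String host = "logs.example.com";                   // hypothetical host
            String user = "appuser";                            // hypothetical user
            String logPath = "/opt/legacy-app/logs/app.log";    // hypothetical log path

            JSch jsch = new JSch();
            Session session = jsch.getSession(user, host, 22);
            session.setPassword(System.getenv("SSH_PASSWORD")); // key auth is preferable
            session.setConfig("StrictHostKeyChecking", "no");   // verify host keys properly in production
            session.connect();

            ChannelSftp sftp = (ChannelSftp) session.openChannel("sftp");
            sftp.connect();
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(sftp.get(logPath), StandardCharsets.UTF_8))) {
                reader.lines().forEach(System.out::println);    // parse instead of printing in a real app
            } finally {
                sftp.disconnect();
                session.disconnect();
            }
        }
    }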

How does the apache2 server know which config file to look into if there are multiple config files for multiple websites in Ubuntu

I want to host multiple websites using a single IP address, i.e. using name-based virtual hosting. Some blogs say that we need to create separate config files for the different websites and enable all of them, but how does the Apache server know which config file to look into? That is, if I have three config files named website1.conf, website2.conf, and default.conf, and I type website2 into Chrome, how does the server know which config file to use?
The server is compiled to look for a single configuration file, which can be overridden by the -f command line flag. The configuration file can explicitly Include other configuration files or entire directories of configuration files.
At startup, the server parses the configuration. If it leads to other files, so be it. If those files have <virtualhost> directives, then the server will look at the directives within them to figure out what you've told it about routing requests.
apachectl -S can summarize what the server knows about virtual hosts.
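To illustrate (hostnames and paths here are made up), on Ubuntu each per-site file under /etc/apache2/sites-available/ typically holds one <VirtualHost> block, and Apache picks the block whose ServerName (or ServerAlias) matches the Host header of the incoming request:

    # /etc/apache2/sites-available/website2.conf -- hypothetical example
    <VirtualHost *:80>
        ServerName website2.example.com
        DocumentRoot /var/www/website2
    </VirtualHost>

    # Enabling the site symlinks this file into sites-enabled/, which the main
    # apache2.conf pulls in through an IncludeOptional directive:
    #   sudo a2ensite website2.conf && sudo systemctl reload apache2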

Is it possible to have nginx stream a file for download that is currently being written to?

So I have 2 services running, one transcodes a file in realtime (ffmpeg), and another exposes it through http (nginx). The problem I currently have is that when ffmpeg begins transcoding, and I access the file through nginx, only a portion of the written bytes are downloaded.
Question, is it possible to config nginx in such a way as to stream the file currently being written to until writing finishes and I now have the complete file on my local computer?
Thank you
I don't believe Nginx by itself can do this. You would need an application (PHP, Perl, Python, whatever) that can monitor the transcoding progress and serve the request with chunked transfer encoding. Essentially it would keep the connection open to the client and deliver more data as it becomes available.
I had a very similar issue. I don't know how to begin streaming while ffmpeg is still transcoding the file, but here was my fix:
I had a PHP script that made the system calls (though you could write this in pretty much any language). The script had ffmpeg write to a temporary file. Before calling ffmpeg, it checked whether the temporary file already existed, to eliminate concurrency issues; if it did, it waited until the real file existed.
Once the file was done converting, it renamed the temporary file to the real file and redirected the http request to the transcoded file.
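The original was a PHP script, but the same temp-file-then-rename pattern can be sketched in Java (paths and ffmpeg arguments are made up; the point is that nginx only ever sees the final name once the file is complete):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;

    public class TranscodeAndPublish {
        public static void main(String[] args) throws IOException, InterruptedException {
            Path tmp = Paths.get("/var/www/media/video.tmp.mp4");  // hypothetical paths
            Path done = Paths.get("/var/www/media/video.mp4");

            // If the temp file already exists, another transcode is in progress.
            if (Files.exists(tmp)) {
                System.out.println("Transcode already running; wait for " + done);
                return;
            }

            // Have ffmpeg write only to the temporary name.
            Process ffmpeg = new ProcessBuilder(
                    "ffmpeg", "-i", "/srv/input/source.mkv", "-c:v", "libx264", tmp.toString())
                    .inheritIO()
                    .start();
            if (ffmpeg.waitFor() != 0) {
                Files.deleteIfExists(tmp);
                throw new IOException("ffmpeg failed");
            }

            // Rename atomically: a request through nginx never sees a half-written file.
            Files.move(tmp, done, StandardCopyOption.ATOMIC_MOVE);
        }
    }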

Can I copy/paste htpasswd files to my new server?

I'm documenting the procedure for a full redeploy on my development server. Small staff, using Basic authentication (over SSL, of course) with an htpasswd file backend.
Is it safe to transfer the .htpasswd file as-is?
The Operating Systems will potentially differ, but the software on top (ie. Apache) will be the same.
It's safe to transfer the htpasswd file no matter what architectures you are on. It is a text file. The only case in which you might need to do some conversions is to deal with line endings if you were moving between Unix and Windows, but between Linux/Unix boxes, no problems.
Short answer: Yes you can.
That's all you need to know.
If you're using Apache, then the .htpasswd file should be the same. Just make sure your new server config isn't pointing at a different password file, and set proper permissions on the file so that nobody can add unauthorized entries.
If you use the same Apache version you shouldn't have any problems just copying the .htpasswd files: they only contain the hashes that people's passwords are compared against, and the hashing is always the same (or it wouldn't work on any machine ;))