How does the apache2 server know which config file to look into if there are multiple config files for multiple websites in Ubuntu - ubuntu-16.04

I want to host multiple websites using a single IP address, i.e. using name-based virtual hosting. Some blogs say that you need to create separate config files for the different websites and enable all of them, but how does the Apache server know which config file to look into? That is, if I have three config files named website1.conf, website2.conf and default.conf, and I type website2 into Chrome, how does the server know which config file to look into?

The server is compiled to look for a single configuration file, and that location can be overridden with the -f command-line flag. The configuration file can explicitly Include other configuration files or entire directories of configuration files.
At startup, the server parses the configuration. If it leads to other files, so be it. If those files contain <VirtualHost> directives, the server looks at the directives within them to figure out what you've told it about routing requests: for name-based virtual hosting, each request's Host header is matched against the ServerName and ServerAlias of the parsed virtual hosts.
apachectl -S can summarize what the server knows about virtual hosts.
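To make that concrete, here is a minimal sketch of the Ubuntu layout; the IncludeOptional line is what ships in Ubuntu's default /etc/apache2/apache2.conf, while the site file and names are hypothetical:

    # /etc/apache2/apache2.conf pulls in every site enabled via a2ensite:
    IncludeOptional sites-enabled/*.conf

    # /etc/apache2/sites-available/website2.conf
    <VirtualHost *:80>
        # Requests whose Host header matches this name are routed here
        ServerName website2.example.com
        DocumentRoot /var/www/website2
    </VirtualHost>

So no config file is chosen per request: all enabled files are parsed once at startup, and the request's Host header selects among the resulting <VirtualHost> blocks.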

Related

Firebase hosting: The remote web server hosts what may be a publicly accessible .bash_history file

We host our website on Firebase. We fail a security check for the following reason:
The remote web server hosts publicly available files whose contents may be indicative of a typical bash history. Such files may contain sensitive information that should not be disclosed to the public.
The following .bash_history files are available on the remote server:
- /.bash_history
- /cgi-bin/.bash_history
- /scripts/.bash_history
Each is accompanied by the same note: this file is being flagged because you have set your scan to 'Paranoid'. The contents of the detected file have not been inspected to see if they contain any of the common Linux commands one might expect to see in a typical .bash_history file.
The problem is that we don't have an easy way to get access to the hosting machine and delete these files.
Does anybody know how this can be solved?
If you are using Firebase Hosting, check the directory (usually public) that you are uploading via the firebase deploy command. Hosting serves only those files (plus a couple of auto-generated ones under the reserved __/ path for auto-configuration).
If you have a .bash_history, cgi-bin/.bash_history or scripts/.bash_history in that public directory, it will be uploaded to and served by Hosting. There are no automatically served files with those names.
You can check your public directory and update the list of files to ignore on the next deploy using the firebase.json file (see this doc). You can also download all the files that Firebase Hosting is serving for you using this script.
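For reference, a minimal firebase.json sketch; the **/.* glob (part of the default scaffold) keeps dotfiles such as .bash_history out of the deploy:

    {
      "hosting": {
        "public": "public",
        "ignore": [
          "firebase.json",
          "**/.*",
          "**/node_modules/**"
        ]
      }
    }

Since each deploy is a full snapshot of the public directory minus the ignored files, redeploying after adding the patterns removes the previously served files.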

Managing multiple servers in an environment with PowerShell DSC

I want to manage the servers in our staging pipeline with PowerShell DSC (push model). The servers map to the environments as follows:
Development: 1 server
Test: 2 servers
UAT: 2 servers
Production: 2 servers
The servers within one environment have the same configuration, but the configuration differs between the environments. I want to go with the push model because then I do not have to set up a pull server.
PowerShell DSC offers the option to manage the configuration via configuration data in a separate file. But this comes with the caveat that you need to specify a node name that matches the respective server name. That means I need to copy the configuration data for each server in one environment, and when changing the configuration I need to remember that there is a second place to update the value. The sketch below illustrates the duplication.
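For illustration, configuration data with per-node entries typically looks like this (node names and settings here are hypothetical); note how the same values must be repeated for both Test servers:

    @{
        AllNodes = @(
            # Same settings duplicated for every server in the Test environment
            @{ NodeName = 'TEST01'; WebsiteName = 'MyApp'; AppPoolCount = 2 }
            @{ NodeName = 'TEST02'; WebsiteName = 'MyApp'; AppPoolCount = 2 }
        )
    }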
Additionally, I do not really care about the server names. If the servers are exchanged tomorrow for new ones, the configuration relevant to the environment should just be applied.
What is the best practice approach to manage multiple servers within one environment with the same configuration?
Check these links; I think they cover the scenario:
- Using A Single DSC Configuration for Multiple Servers
- DSC ConfigurationNames with multiple nodes
The .mof file that gets produced does not contain the node name inside it. So as long as you build a generic configuration, you can rename it after the fact at deploy time.
You can create one config for each environment with some generic name, then enumerate the list of servers and make a copy of the config for each one with that server's name.
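A minimal sketch of that compile-and-rename step, assuming a configuration named WebServerConfig (hypothetical) that targets localhost, and a made-up server list:

    # Compile the generic config once; this produces .\mof\localhost.mof
    WebServerConfig -OutputPath .\mof

    # Stamp out one .mof per server in the environment
    $servers = 'TEST01', 'TEST02'
    foreach ($s in $servers) {
        Copy-Item .\mof\localhost.mof ".\mof\$s.mof"
    }

    # Push; Start-DscConfiguration applies each <name>.mof to the computer <name>
    Start-DscConfiguration -Path .\mof -Wait -Verbose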
You can take it a step further: have a share where you create a folder for each server that matches the server's name, then copy the .mof for that server into that folder under the name localhost.mof. You can then run Start-DscConfiguration -Path \\server\share\$env:computername from each machine as part of your deployment script.

SSH (FTP) and Web server concurrent file IO

I have a server with Apache.
I have a problem with concurrent read-write operations on one file.
Assume I have an index.html file in the Apache DocRoot. In a browser I can open and read it.
I'm using the Eclipse IDE to modify files directly on the server through SSH (or FTP).
After making some changes to the file, I upload it to the server. The upload takes some time.
The problem is: if I try to view the file in the browser WHILE THE FILE IS UPLOADING, the upload hangs and the target file becomes blank. It looks like Apache and the SSH server are both trying to access the file, SSH to write and Apache to read, and the collision breaks everything.
Any ideas how to avoid this? Maybe some SSH server config option or an Apache module?
You need to lock the file first. Do you know which operating system and Apache configuration you use? Is it your own system?
Here is a quote from the Apache server docs:
EnableMMAP Directive
Description: Use memory-mapping to read files during delivery
Syntax: EnableMMAP On|Off
Default: EnableMMAP On
Context: server config, virtual host, directory, .htaccess
Override: FileInfo
Status: Core
Module: core
This directive controls whether the httpd may use memory-mapping if it needs to read the contents of a file during delivery. By default, when the handling of a request requires access to the data within a file -- for example, when delivering a server-parsed file using mod_include -- Apache httpd memory-maps the file if the OS supports it.
This memory-mapping sometimes yields a performance improvement. But in some environments, it is better to disable the memory-mapping to prevent operational problems:
- On some multiprocessor systems, memory-mapping can reduce the performance of the httpd.
- Deleting or truncating a file while httpd has it memory-mapped can cause httpd to crash with a segmentation fault.
For server configurations that are vulnerable to these problems, you should disable memory-mapping of delivered files by specifying:
EnableMMAP Off
For NFS mounted files, this feature may be disabled explicitly for the offending files by specifying:
EnableMMAP Off
Since your server misbehaves when the file is modified during delivery, I suspect that you have this option on (it is the default) for the directory your file is in.
Add
EnableMMAP Off
to the .htaccess file for your directory.
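If the directive seems to have no effect, note that the Override line in the quoted table means the directory must allow FileInfo overrides. A minimal sketch, with a hypothetical DocRoot path:

    # httpd.conf -- allow .htaccess in the directory to carry FileInfo directives
    <Directory /var/www/html>
        AllowOverride FileInfo
    </Directory>

    # /var/www/html/.htaccess
    EnableMMAP Off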

What is the reason for using the Service Binder when running multiple JBoss instances (JBoss 4.2)

I found a couple of tutorials on how to run multiple instances of JBoss on the same machine.
All of them mention uncommenting the Service Binder and having a separate service-binding.xml file for each server.
The question is why it's done like that. Is there any reason besides adding an additional layer of indirection?
It looks like the same could be done by modifying the ports in jboss-service.xml for each server. The only restriction would be that there won't be an easy way to switch which instance of JBoss uses which set of ports.
You are right that the ports can be modified in jboss-service.xml. That is the straightforward and genuine way to change them.
Unfortunately, ports are defined not only in that file but also in other places, like jboss-web's configuration, etc.
Catching all those places can be error-prone.
So the idea was to have a central file (service-binding.xml) that lives in the root of a server installation. You basically copy the 'default' config to server1, server2, etc., and then pass in the server name on the command line at startup, so that the correct port offset for all of the services is taken from service-binding.xml and applied to the resulting runtime configuration.
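For illustration, in AS 4.x the binding manager is enabled by uncommenting an MBean along these lines in conf/jboss-service.xml (quoted from memory of the shipped sample; verify the exact attribute values against your installation):

    <mbean code="org.jboss.services.binding.ServiceBindingManager"
           name="jboss.system:service=ServiceBindingManager">
      <!-- Which named port set from the central bindings file this instance uses -->
      <attribute name="ServerName">ports-01</attribute>
      <attribute name="StoreURL">
        ${jboss.home.url}/docs/examples/binding-manager/sample-bindings.xml
      </attribute>
      <attribute name="StoreFactoryClassName">
        org.jboss.services.binding.XMLServicesStoreFactory
      </attribute>
    </mbean>

Each instance points ServerName at a different port set (ports-default, ports-01, ports-02, ...) defined once in the central bindings file.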
JBoss AS 7 takes this concept one step further with socket binding groups: the base ports are defined at the domain level, and per server you just pick a binding group plus a port offset by name, so even less work is needed than in AS 4.

Pylons/Paste config files in FastCGI (deployment)

I'm running a Pylons app using FastCGI and Apache2. There are two versions (different revisions from my svn repo), one for staging and one for production. I'd like them to use different Paste config files.
Right now, my dispatch.fcgi inside htdocs in the Pylons app uses just one config file (so both stage and live use the same configuration). I'd like to be able to have debugging enabled on the stage server but not on the live server, for example. Any suggestions?
One approach would be to have more than one dispatch.fcgi prepared (each referencing a different INI file), then run a script on deployment to copy the correct one into the active position.
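A sketch of that deploy step; the file names and the DEPLOY_ENV variable are made up:

    # Keep one dispatch file per environment in the repo, each loading its own INI,
    # then activate the right one at deploy time:
    cp "dispatch.fcgi.$DEPLOY_ENV" dispatch.fcgi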
Another approach would be to have two .fcgi files and use an IfDefine directive to select the proper rules in your main httpd.conf.
In other words, on the staging server you start httpd with httpd -D staging, then put the staging config inside <IfDefine staging></IfDefine> and the other config inside <IfDefine !staging></IfDefine>, as sketched below.
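A hypothetical httpd.conf fragment (the mod_fastcgi FastCgiServer paths are made up; adapt them to wherever your dispatch files live):

    <IfDefine staging>
        # Active when httpd is started with: httpd -D staging
        FastCgiServer /var/www/app/htdocs/dispatch-staging.fcgi
    </IfDefine>
    <IfDefine !staging>
        FastCgiServer /var/www/app/htdocs/dispatch-live.fcgi
    </IfDefine>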
The limitation of this approach is that since IfDefine is binary, going past two options while still having a "default" option requires a bunch of extra lines. It's not the end of the world, and if you require a parameter to be given on all deployments it stays clean.
Still, I would use option #1.