Intranet website with Joomla?

My company wants to set up a small intranet portal on our LAN. We are about 100 users at most. I am thinking about Joomla in a Windows server environment with XAMPP.
Just to be safe, is XAMPP efficient enough for serving about 50 to 100 users? Does it have any connection limits? Also, how suitable is it as a web server for a small intranet portal?
Have your say, guys.

XAMPP is "just" a collection of established applications for serving web pages. The underlaying apache can handle far more that the expected 100 users.
I haven't tried it yet, but think that maybe even the out-of-the-box configuration might be sufficient - if not you can always modify the underlaying Apache and/or MySQL database according to your needs.
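For orientation, XAMPP's Apache on Windows uses the winnt MPM, and the concurrency settings live in apache\conf\extra\httpd-mpm.conf. The path and values below are illustrative assumptions only; the defaults are almost certainly enough for 100 users:

    # Illustrative mpm_winnt settings; the defaults already allow far more
    # simultaneous connections than a 100-user intranet will generate.
    <IfModule mpm_winnt_module>
        ThreadsPerChild        150
        MaxConnectionsPerChild 0    # called MaxRequestsPerChild in Apache 2.2
    </IfModule>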

XAMPP is just a handy single-click installer for Apache/MySQL/PHP, which is all you need to run Joomla. This stack powers some of the largest websites on the net, so I don't think you'll run into any problems there. The specs of the server are what you should be most concerned about, but any low-end server should be able to handle that capacity without blinking.
Just be aware that the default settings used by XAMPP are specifically designed for developers working on their own local machines: there's no root password for MySQL, permissions are very relaxed, etc. Take some time to go through the config after you set it up.
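As a concrete example of that hardening, setting a root password for the bundled MySQL can be as simple as the following (the password is a placeholder, and the phpMyAdmin path may differ between XAMPP versions):

    mysqladmin -u root password "choose-a-strong-password"

After that, update the password stored in xampp\phpMyAdmin\config.inc.php so phpMyAdmin can still connect.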

You could also look at WAMP, depending on your requirements. Similar sort of thing but with the same issues that nickf stated.

Related

How to set up PostgreSQL replication on Windows Server 2016 servers

I realize this is a basic question, and not very specific, but I don't know where else to go with it. I am being asked to deploy two web servers onto two Windows Server 2016 hosts for load balancing. The database backend for the two web servers is PostgreSQL with PostGIS.
I know how to install PostgreSQL.
I also know how to get them going on each host and attach them to their respective web servers.
What I don't know how to do is set them up for multi-master replication. On Windows. All the solutions I have found so far are Linux-based.
I'm looking for options and ideas.
Thank you.
I've personally never set up replication in a Windows environment; working with PostgreSQL, you're almost guaranteed some form of Linux environment. That being said, I did find a blog post that details how to set up replication between Windows servers (a read-only secondary, not multi-master). This may not be a full solution for you, but hopefully it will help.
https://www.sigterritoires.fr/index.php/en/replicating-a-postgresql-database-in-a-windows-workstation/
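In case that link disappears, the rough shape of a streaming read-only standby is the same on Windows as on Linux; only the service management differs. A hedged outline, assuming PostgreSQL 10 or newer and placeholder host names, role names, and paths:

    On the primary, in postgresql.conf:
        wal_level = replica
        max_wal_senders = 5
    On the primary, create a replication role:
        CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD 'a-strong-password';
    On the primary, in pg_hba.conf (allow the standby to connect for replication):
        host  replication  replicator  192.0.2.20/32  md5
    On the standby, clone the primary into an empty data directory and mark it as a standby:
        pg_basebackup -h primary-host -U replicator -D "C:\PostgreSQL\data" -R -P

Then register and start the PostgreSQL service on the standby pointing at that data directory.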

Can I create a remote server with MongoDB? How?

To be clearer, my question is about creating a server with MongoDB on a cloud host (for example) and accessing it from another server.
Example:
I have a mobile app.
I host my MongoDB on a cloud host (Ubuntu).
I want to connect my app to the DB on that cloud server.
Is it possible? How?
I'm just getting into this, and my question is exactly about setting up MongoDB as a server that I can access remotely.
Outside of "localhost", that is, which is different from all the tutorials I've seen.
From what you are describing, I think you want to implement a 2-Tier-Architecture. For practically all use cases, don't do it!
It's definitely possible, yes. You can open up the MongoDB port in your firewall. Let's say your computer has a fixed IP or a fixed name like mymongo.example.com. You can then connect to mongodb://mymongo.example.com:27017 (if you use the default port). But beware:
1) Security: You need to make sure that clients can only perform the operations you want to allow, e.g. using MongoDB's built-in authentication; otherwise some random script kiddie will steal your database, delete it, or fill it with random data. Many servers, even if they don't host a well-known service, get attacked thousands of times per day. You also probably want to encrypt the connection so nobody can eavesdrop on it. And to make it all worse, you would have to store the database credentials in your client app, which is practically impossible to do in a truly secure way.
2) Software architecture: There are a ton of arguments against this architecture, but 1) alone should be enough. You never want to couple your clients to the database, be it because of data migrations, software updates, security considerations, etc.
3-Tier
So what should you do instead? Use a 3-Tier-Architecture: host a server of some kind on mymongo.example.com that then connects to the database. That server could be implemented in nginx/Node.js, IIS/ASP.NET, Apache/PHP, or whatever. It could even be a plain old C application (like many game servers).
MongoDB can still reside on yet another machine, but when you use a server in between, the database credentials are only known to the server, not to all the clients (see the sketch below).
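To make that concrete, here is a minimal sketch of the middle tier, assuming Python with Flask and PyMongo and the hypothetical host name, database, and credentials used above; the mobile app would call this HTTP endpoint instead of MongoDB directly:

    # Minimal 3-tier sketch: clients talk to this HTTP API, and only this
    # process knows the MongoDB credentials. Flask/PyMongo and all names
    # below are illustrative assumptions, not requirements.
    from flask import Flask, jsonify
    from pymongo import MongoClient

    app = Flask(__name__)

    # The credentials stay on the server and never ship inside the mobile app.
    db = MongoClient("mongodb://appuser:secret@mymongo.example.com:27017/myapp")["myapp"]

    @app.route("/items")
    def list_items():
        # Expose only the operations the client is actually allowed to perform.
        items = [doc["name"] for doc in db.items.find({}, {"_id": 0, "name": 1})]
        return jsonify(items=items)

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)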
Yes, it is possible. You would connect to MongoDB using the IP address of your host, or preferably its fully qualified hostname, rather than "localhost". If you do that, you should secure your MongoDB installation, otherwise anyone would be able to connect to your MongoDB instance. At an absolute minimum, enable MongoDB authentication. You should read up on MongoDB Security.
For a mobile application, you would probably have some sort of application server in front of MongoDB, i.e. your mobile application would not be connecting to MongoDB directly. In that case only your application server would be connecting to MongoDB, and you would secure MongoDB accordingly.
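As a hedged illustration of that absolute minimum, creating users with PyMongo might look like this (user names, passwords, and the database name are placeholders); you then need to enable authorization in mongod.conf (security: authorization: enabled) and restart mongod for it to take effect:

    # Run this locally, before authorization is enforced, to create the users.
    # All names and passwords below are placeholders.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")

    # Administrative user, for managing other users.
    client.admin.command("createUser", "admin",
                         pwd="a-strong-admin-password",
                         roles=["userAdminAnyDatabase"])

    # Application user, limited to the application's own database.
    client.myapp.command("createUser", "appuser",
                         pwd="a-strong-app-password",
                         roles=["readWrite"])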

Solution for Multidomain Email?

I'm using a custom root server to handle multiple domains on one IP. The base OS is Debian, and the web stack is Nginx + MariaDB.
Now I'm trying to install a working, non-MySQL-based email service on it. I've watched several tutorials and googled the whole web for a solution.
My last attempt was to use Postfix and Dovecot. The emails were sort of recognized, but I'm getting this error:
<domain.org/info#mail.domain.com> (expanded from <info#domain.org>):
mail for mail.domain.com loops back to myself
Is there a step-by-step explanation for multi-domain mail alias setup that does not rely on MySQL?
Do I need to run my virtual email setup on MySQL?
Is there any cPanel- or Plesk-like interface that can handle virtual email aliases without MySQL?
Postfix can use MySQL as a backend, but it's not required. You can usually find tutorials on the net that just use plain db files (see the sketch at the end of this answer).
No, you don't have to.
No idea. I usually do that stuff directly in the files or with a database backend.
This question might be better suited for Server Fault, but it's pretty generic as it stands.
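A hedged sketch of the flat-file variant, with placeholder domains and local users; adjust to your setup:

    # /etc/postfix/main.cf
    virtual_alias_domains = domain.org otherdomain.com
    virtual_alias_maps = hash:/etc/postfix/virtual

    # /etc/postfix/virtual
    info@domain.org        localuser1
    info@otherdomain.com   localuser2

    # rebuild the lookup table and reload
    postmap /etc/postfix/virtual
    postfix reload

As for the "loops back to myself" error, that usually means the host name your MX record points at is not covered by mydestination or one of the virtual domain settings, so Postfix keeps handing the mail back to itself.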

Cluster/load balancing software that displays multiple hard drives as one?

Does anyone know of clustering/load balancing software (free or commercial) that, once set up, only requires you to log in in one place, with all hard drives mounted together as one?
For example, currently I have one server which I access by going to www.myurl.com/cpanel; one hard drive is displayed and I upload all my website files there.
If I had 100 Linux or Windows servers connected together using load balancers and wanted to run them as a cluster, is there software where I can just go to www.myurl.com/cpanel and, once I log in, see not multiple drives but the total space of ALL the hard drives, so I can upload one file and have it automatically copied to all the drives?
If software like this is not currently available, do you think it would be possible to program something like it? Alternatively, is there software where you put a file on your website, visitors download and run it, and their internet connection becomes part of your web server, so that when people access your website, some of the data and CPU usage comes from your web server and some from the users who downloaded the file?
You're talking about two different things in your question: mirrored drives and distributed storage.
There are SAN (Storage Area Network) products, from EMC for example, that can do mind-blowing things with the way storage is handled. There are also local-disk-to-SAN technologies like iSCSI.
For what you want to do, you want to mirror a folder across 100 servers. On *nix servers you have rsync (a one-liner is sketched below), and Windows Server 2003 R2 has File Replication Services. I do know that FRS (part of the Distributed File System technology) will do exactly what you want, and it comes out of the box in Windows Server 2003 (R2 only) and 2008.
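For the *nix side, a hedged example of mirroring a web root from one box to another (host names and paths are placeholders; you would run this once per target server, e.g. from cron or a deploy script):

    # -a preserves permissions and timestamps, -z compresses in transit,
    # --delete removes files on the target that no longer exist on the source.
    rsync -az --delete /var/www/ deploy@web02.example.com:/var/www/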
Look at Direct Attached Storage and Storage Area Network.
From what I can tell, DAS is cheaper as it runs over existing network infrastructure, while a SAN essentially has its own dedicated fiber network running parallel to the regular network.
Actually, that Wikipedia link is old; see Dell's DAS options for a better example.

EC2: can I host an HTTP server there?

Does anyone have experience deploying GWT apps to EC2?
If I were to install Tomcat or Apache on an EC2 instance, could I have users connect directly to a URL pointing there?
Would that be cost-effective, or would Java hosting services be better?
Is there any downside to hosting the edge HTTP server on a regular hosting service and having it direct requests to EC2? Is performance ever an issue here?
Other answers are correct, but I just wanted to share the fact that we are developing a product that is 100% EC2/S3 based and also has a pure GWT front end.
We use maven2 for builds and the excellent gwt-maven plugin. This makes it easy to produce a WAR package of our web application as output. We use Jetty but Tomcat would work just as well.
We have Pound (an HTTP accelerator/load balancer) running on the VM listening for HTTP and HTTPS, which then forwards requests to lighttpd (static content) or Jetty (the app). This also simplifies SSL certificates because Pound handles SSL. I've found Java servers have always been a pain to configure with SSL certs.
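For reference, a Pound configuration in that spirit looks roughly like this (addresses, ports, and the URL pattern are placeholder assumptions):

    ListenHTTP
        Address 0.0.0.0
        Port    80

        # Static assets go to lighttpd.
        Service
            URL ".*\.(css|js|jpg|png|gif)$"
            BackEnd
                Address 127.0.0.1
                Port    8081
            End
        End

        # Everything else goes to the Jetty application server.
        Service
            BackEnd
                Address 127.0.0.1
                Port    8080
            End
        End
    End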
Yes, you can host pretty much whatever you want, as you effectively have a dedicated Linux machine at your command.
As I recall, the basic rate for an EC2 instance on their "low end box" worked out to around $75/month, so you can use that as a benchmark against other vendors. That also assumes the machine is up 24x7 (since you pay for it by the hour).
The major downside of an EC2 instance is simply that it can "go away" at any time, and when it does, any data written to your instance will "go away" as well.
That means you need to set things up so that you can readily restart the server, and you also need to offload any data that you generate and wish to keep (either to one of Amazon's other services, like S3, or to some other external service). That will incur some extra cost depending on volume.
Finally, you will also be billed for any traffic to the service.
The thing to compare it against is another "Virtual Server" from some other vendor. There is a lot of interesting things that can be done with EC2, but it may well be easier to go with a dedicated Virtual hosting service if you're just using a single machine.
Others have given good answers. I would have to add that you need to spend programmer time getting to know EC2's quirks and addressing them (e.g. with EBS). It's not completely trivial, and though it is useful knowledge to have and may be worth it for that reason alone, if you want to get up and running quickly with just a few servers, you should probably look at other hosted options.
On the other hand, if you plan to scale up massively enough (eventually hosting many servers on EC2), then I would highly recommend it. You have to architect a few things, but you need to do that anyway. The flexibility of on-demand computing, and the generally low price, makes this a killer platform once you reach a certain scale of operation.
You definitely can host an http server in EC2, but you need to take into consideration the following:
As mentioned before, the cost can be much higher than with alternative hosting solutions.
Your instance (the machine you've started in EC2) can go away unexpectedly. There is no guarantee from Amazon of 24x7 availability. This means that the data you've stored in local storage will be lost, and when you start a new instance, it will get a new IP.
To successfully host a server on EC2, you therefore need to employ some other services from Amazon. You need an Elastic IP so that you can sidestep the new-IP problem. You can also use Elastic Block Store (EBS): a service that lets you attach a volume to your machine whose data does not go away when your instance is lost.
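As a hedged illustration with today's AWS CLI (the instance, allocation, and volume IDs below are placeholders), the two pieces mentioned above look roughly like this:

    # Reserve a static public IP and bind it to the instance.
    aws ec2 allocate-address
    aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0abc1234

    # Create a persistent EBS volume and attach it to the instance;
    # its data persists independently of the instance's local storage.
    aws ec2 create-volume --size 20 --availability-zone us-east-1a
    aws ec2 attach-volume --volume-id vol-0abc1234 --instance-id i-0123456789abcdef0 --device /dev/sdf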