Securing running mongo servers - mongodb

I have replication set up for MongoDB without security and authorization, but now I wish to add security by means of authorization. The setup is like this:
4 boxes: A, B, C, D, with A being the master and the rest slaves. They are all running in no-authentication mode.
Now I wish to add the --keyFile option to all of them and thus enforce login mode.
The issue is that I do not want any downtime. I wish to add the security seamlessly, without any impact on the end-user site. What steps should I follow?

You are going to have to restart every instance at some point to enable security and authentication. You cannot run a mixed set (authentication and no-authentication), so making the switch is not possible without downtime.
All you can do is minimize that downtime by adding the admin user first and making sure your keyfiles are consistent and in place before the restarts, as described here:
http://www.mongodb.org/display/DOCS/Security+and+Authentication
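For illustration, the broad steps might look like the sketch below (the keyfile path, password, and replica-set name are assumptions; the old 2.x shell used db.addUser, which newer versions replace with db.createUser):

# Generate a keyfile and copy the identical file to all four boxes
openssl rand -base64 741 > /etc/mongodb-keyfile
chmod 600 /etc/mongodb-keyfile

# While auth is still off, create the admin user on the master (2.x shell syntax)
mongo admin --eval 'db.addUser("admin", "strong-password")'

# Restart each mongod with the keyfile, slaves first, master last
mongod --replSet rs0 --keyFile /etc/mongodb-keyfile --dbpath /data/db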

How to use multiple authentication plugins in the same service in Kong

I am looking to use Cypress for end-to-end testing of some Kubernetes applications. Typically I access these applications via OIDC through Kong; however, Cypress doesn't support this, though it does support key-auth via an API key. Is there a way of setting up the service so that I can use both of these simultaneously?
I think you cannot use more than one authentication plugin in an XOR scenario. This would only work for AND, as long as the plugins do not use the same headers.
I also faced this problem, and I solved it by setting up one service (pointing to the backend) and multiple routes (one for normal traffic, one for test traffic). You can then activate different plugins on each route instead of attaching them to the service, as sketched below.
The only downside is the slightly different base path you use for testing, but I think this is less problematic than testing with a different authentication method.
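For illustration, with Kong's Admin API the split might look roughly like this (hostnames, ports, paths, and the use of Kong's openid-connect plugin are assumptions, and the OIDC plugin would still need its issuer and client configuration):

# One service pointing to the backend
curl -X POST http://localhost:8001/services \
  --data name=backend --data url=http://backend:8080

# Route for normal traffic, with OIDC enabled on the route
curl -X POST http://localhost:8001/services/backend/routes \
  --data name=main --data 'paths[]=/api'
curl -X POST http://localhost:8001/routes/main/plugins \
  --data name=openid-connect

# Route for test traffic, protected by key-auth instead
curl -X POST http://localhost:8001/services/backend/routes \
  --data name=test --data 'paths[]=/test/api'
curl -X POST http://localhost:8001/routes/test/plugins \
  --data name=key-auth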

Deploy code to multiple production servers under the load balancer without continuous deployments

I am the only developer (full-stack) in my company, and right now I have too much other work to automate the deployments. In the future we may hire a DevOps engineer for that.
Problem: We have 3 servers under a load balancer. I don't want to block the 2nd and 3rd servers until the 1st server is updated, and then repeat the same with the 2nd and 3rd, because one server might initially receive huge traffic and fail at some specific point before the other servers go live.
                                Server 1
Users ----> Load Balancer ----> Server 2 ----> Database
                                Server 3
Personal opinion: Is there a way we can pull the code by running scripts on the load balancer? I could replace the traditional DigitalOcean load balancer with an Nginx server, making it a reverse proxy.
NOTE: I know there are plenty of other questions about this on Stack Overflow, but none of them solves my problem.
Solutions I know
Git hooks - I know somewhat about Git hooks, but I don't want to use them, because if I commit to the master branch by mistake it would sync to production and create havoc on the live server for live users.
Open multiple tabs of servers and do it manually (current scenario). Believe me, it's a pain in the ass :)
Any suggestions or redirects to the solutions will be really helpful for me. Thanks in advance.
One solution is to write an Ansible playbook for this. With Ansible, you can specify that it run on one host at a time, and as the last step you can include a verification check that your application responds with a 200 status code, or that queries some endpoint indicating the application's status. If the check fails, Ansible stops the execution. For example, in your case, if Server 1 deploys fine but the deployment fails on Server 2, the playbook stops and you still have Servers 1 and 3 running.
I have done this myself. It works fine in environments without continuous deployments.
Here is one example
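For illustration, a minimal playbook along these lines might look like this (the group name, repository, service name, and health-check endpoint are all assumptions):

# deploy.yml - rolling deploy, one host at a time
- hosts: webservers
  serial: 1                     # take one server out of rotation at a time
  tasks:
    - name: Pull the latest release
      git:
        repo: https://example.com/myorg/myapp.git
        dest: /var/www/myapp
        version: master

    - name: Restart the application
      service:
        name: myapp
        state: restarted

    - name: Verify the app answers with HTTP 200 before moving on
      uri:
        url: http://localhost/health
        status_code: 200
      register: health
      until: health.status == 200
      retries: 5
      delay: 3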

Sophos UTM VPN not accessible

I used the Sophos UTM 9.510 ha_standalone CloudFormation template (https://github.com/sophos-iaas/aws-cf-templates/blob/master/utm/9.510/standalone.template) and used defaults where possible. I did not use an existing Elastic IP, so it created its own at (scrubbed) 50.12.12.123.
I gave a hostname of (for example) vpn.example.com, and after creation I created an A record pointing vpn.example.com to 50.12.12.123.
I don't have a license and just pay hourly for the AMI.
I understand that I should be able to hit https://vpn.example.com:4444 or https://50.12.12.123:4444 to see the admin panel. However, it times out and doesn't load anything.
When I deployed the stack, I got an email at the admin address I provided saying "REST daemon not running - restarted". I assume it restarted fine, since I have received no new emails and the EC2 instance is running.
Has anyone else experienced this? Is there a step I'm missing? Aside from creating the Route 53 record, I thought the CloudFormation template should just work right out of the box.
The default security groups blocked traffic. I modified one of them to accept all traffic and the dashboard became accessible. I will now refine access further.
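For anyone hitting the same wall, a tighter fix than allowing all traffic could be a rule that opens the WebAdmin port only to your own network, e.g. with the AWS CLI (the security-group ID and CIDR are assumptions):

# Allow the admin panel (port 4444) only from the admin network
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 4444 \
  --cidr 203.0.113.0/24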

AWS deployment without using SSH

I've read some articles recently on setting up AWS infrastructure without enabling SSH on EC2 instances. My web app requires a binary to run, so how can I deploy my application to an EC2 instance without using SSH?
This was the article in question.
http://wblinks.com/notes/aws-tips-i-wish-id-known-before-i-started/
Although doable, as the article says, it requires you to think of servers as ephemeral. A good example of this is web services that scale up and down depending on demand. If something goes wrong with one of the servers, you can just terminate it and spin up another one.
Generally, you can accomplish this using a pull model: for example, at boot the instance pulls your code from a Git/Mercurial repository and then executes scripts to set itself up. Those scripts set up all the monitoring required to determine whether your server and application are running correctly. You would still need an SSH client on the instance if you want to pull your code over SSH, although you could also do it over HTTPS, as in the sketch below.
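For illustration, such a bootstrap could be an EC2 user-data script along these lines (the repository URL and the setup/start scripts are assumptions):

#!/bin/bash
# Runs once at first boot via EC2 user data
set -euo pipefail

# Pull the code over HTTPS so no SSH key is needed on the instance
git clone https://example.com/myorg/myapp.git /opt/myapp

# Hypothetical scripts that install dependencies, configure monitoring, and start the app
/opt/myapp/scripts/setup.sh
/opt/myapp/bin/start.sh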
You can also use configuration management tools that don't use SSH at all, like Puppet or Chef. Essentially, your node/server pulls all of its application and server configuration from the Puppet master or the Chef server, and the Puppet agent or Chef client then performs all the configuration, deployment, and monitoring changes needed for your application to run.
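For example, instead of anything pushing over SSH, the node itself can poll the master on a schedule; a crontab entry along these lines would do it (the 30-minute interval is an assumption):

# Pull configuration from the Puppet master every 30 minutes
*/30 * * * * /usr/bin/puppet agent --onetime --no-daemonize --logdest syslog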
If you go with this model, I think one of the most critical components is monitoring. You need to know at all times whether something is wrong with one of your servers, and in that event discard the server and spin up a new one. (Even better if this whole process is automated.)
Hope this helps.

OpenLdap redirect on write

I am currently trying to set up a redirect-on-write for an installation of OpenLDAP 2.2.
I have two instances running. One is configured to be read-only (only read access, with the database specified as read-only) and has a referral configured to point to the second instance. The second instance is configured to allow the desired write permissions.
When I attempt a modify on the first instance, it fails as expected but does not send back the referral. Am I missing a piece of the configuration? Am I even on the right path? Any guidance would be greatly appreciated. Thanks.
In the database section of your slapd.conf, do you add the redirection like this?
updateref "ldap://master-host:port/"
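For context, the relevant database section on the read-only instance might look like this as a whole (the suffix, directory, and port are assumptions):

# slapd.conf on the read-only instance
database   bdb
suffix     "dc=example,dc=com"
directory  /var/lib/ldap
readonly   on
updateref  "ldap://master-host:389/"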
So, it turns out the best way to do this is to go ahead and set up replication using slurpd and point all requests at the slave instance. Unfortunately you can't set up the master and slave on the same host (for obvious reasons, but still), so I had to spin up a second VM to get this going.
Honestly, if I were not trying to reproduce a redirect problem it wouldn't be worth it, but I have to duplicate a production issue.
For more information on slapd and specifically slurpd, the OpenLDAP documentation is actually crazy helpful: slurpd config for OpenLDAP 2.2