The medium-sized, internal-only website I came in to support has about half of its *.cgi files running without 'taint' mode. Do I need 'taint' mode for an internal website?
Do you trust the internal users? If not, then yes.
Let's say you do trust your internal users and don't need taint at the moment. You could still consider leaving taint ON in the scripts that already have it, if only to train yourself in how to use it. It's not as bad as it feels at first, kind of like walking on coals: it gets better.
I can say that I've had more than one 'internal' website suddenly (requirements changed) become customer-facing, exposed to the internet, and in need of better security.
Another thing to keep in mind is that internal users are sometimes the most disgruntled and the most likely to want to hurt your organization in some petty way.
Hi everyone and sorry for my bad English.
I'm learning penetration testing.
After reconnaissance and scanning of my target, I have enough information to pass to the next phase.
Some of the info I have is open ports with their running services, the names of those services, the service versions, the operating system of the device, the firewalls used, etc.
I launched msfconsole.
I should find the correct exploit and payload, based on the information collected, to gain access. I've read the Metasploit Unleashed guide from Offensive Security. I've learned the Metasploit fundamentals and the use of msfconsole.
But I don't understand how to start all of this. Assuming that my target has 20 open ports, I want to test for vulnerabilities using an exploit and payload that do not require user interaction. That narrows down which exploits and payloads could be used, but there are still too many. Searching and testing every exploit and payload for each port isn't practical! So if I don't know the target's vulnerabilities, how do I proceed?
I would like to be aware of what I'm doing, and not just try things without understanding them.
Couple of things:
We have a stack exchange for security! Check it out at https://security.stackexchange.com/
For an answer: you want to look for "remote exploits", as those do not require user interaction. You can find a curated list of exploits here: https://www.exploit-db.com/remote/
You can search the services on this page for something that matches the same service/version as your attack vector.
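To make that concrete, here is a rough Python sketch (not a definitive workflow) that pulls the open ports and service banners out of an nmap XML report, so you only search for exploits matching services that are actually exposed. The filename scan.xml is a placeholder; it assumes you scanned with something like nmap -sV -oX scan.xml against your target.

```python
# A rough sketch (not a definitive workflow): pull open ports and service
# banners out of an nmap XML report so you can search exploit-db or
# msfconsole for matching exploits instead of trying everything blindly.
# Assumes a scan like:  nmap -sV -oX scan.xml <target>
# "scan.xml" is a hypothetical filename.
import xml.etree.ElementTree as ET

tree = ET.parse("scan.xml")
for host in tree.getroot().iter("host"):
    addr = host.find("address").get("addr")
    for port in host.iter("port"):
        state = port.find("state")
        if state is None or state.get("state") != "open":
            continue
        svc = port.find("service")
        if svc is None:
            continue
        # Build a search string like "Apache httpd 2.4.49" for exploit-db
        query = " ".join(
            filter(None, [svc.get("product"), svc.get("version")])
        ) or svc.get("name", "unknown")
        print(f"{addr}:{port.get('portid')}/{port.get('protocol')}  ->  search for: {query}")
```

Each printed query string is something you can paste into exploit-db's search box (or msfconsole's search command) rather than blindly trying every exploit against every port.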
I apologize beforehand if this is a stupid or a silly question in any way. Let's just say that I stumbled upon an unprotected MongoDB server belonging to a big company. I tried using a client to connect to the server, without entering a username and password and it connected successfully. Now, I'm not sure if I have access to the data inside the databases, but I can see that there are a few databases on it, and I believe that it's possible for me to create and drop databases on it (haven't tried). How big of a security flaw does this constitute? Please note that I haven't tampered or messed around with anything, I'm just asking so I can discern if this is indeed a security flaw that I should report, or a false positive. Shouldn't such access be limited to database administrators?
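For context, here is a minimal sketch of what the kind of unauthenticated connection described above looks like, assuming the Python pymongo driver; db.example.com is a placeholder, not a real host. On a server with authorization enabled, the listing fails instead of succeeding, which is exactly the difference in question.

```python
# Minimal sketch of the kind of unauthenticated access described above,
# using pymongo. "db.example.com" is a placeholder, not a real target.
# On a properly secured server (authorization enabled), listing databases
# without credentials raises an OperationFailure instead of succeeding.
from pymongo import MongoClient
from pymongo.errors import OperationFailure

client = MongoClient("mongodb://db.example.com:27017/",
                     serverSelectionTimeoutMS=5000)
try:
    print(client.list_database_names())   # should fail without credentials
except OperationFailure as exc:
    print("Access denied, as it should be:", exc)
```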
I see where this is going; there are several possible cases:
It might be a development server and the data is fake.
It may be abandoned.
They may be running some maintenance during which some lazy devs opened up the ports and the security.
Most production databases are sealed up well enough; since you call it a "big" company, they most probably have done so.
Whatever the case, depending on the company you could even be slapped with criminal notices; not every company takes bug reports from third parties the right way. If they have a proper bug bounty program, though, they may offer you a reward. Tread with caution.
We have a few servers that have different roles. For instance, we have production servers and testing/staging servers. We have a few end users who forget to switch their paths back to production once things are tested and approved for use; they use the new paths for a bit, then at some point revert to the testing/staging ones for some reason we can't understand other than stupidity. We still want to be able to get a glimpse into our staging environment after pushing a build into production, but we want to stop them from being able to keep hitting those servers/services.
We are now pondering some solutions to this problem. One is to never give them the direct staging URL. An idea would be to create a virtual directory, or a set of domain aliases that we could give them and then shut down while still allowing ourselves access to those endpoints. We could restrict our main staging domain to the office IP range so they never have direct access, and call it good.
Does this sound like a good solution? Is our process wrong, are there better routes?
I am interested in solutions for websites as well as web services where visuals can't be used effectively.
We've run into this at my work as well… quite recently, in fact. One thing I thought about, other than the virtual directory, was setting up specific ports for them to test on, then either taking those ports down or changing them for our internal use only.
Well, without details on how your application is deployed, it's hard to give concrete examples. One wonderful solution is to get better users :P A more practical solution, however, is to let your production boxes move a certain set of users (as decided in your code) to your test/staging systems. I.e., the user always connects to production, but at connect/auth time the production machines may decide these people are too cool for production and let them run the test/staging code instead (a rough sketch of this idea follows this answer).
It's not a foolproof method, of course, but many, many websites use this approach to let a certain set of users into different parts of their codebase.
I don't know how feasible this would be for you, but it's a possibility perhaps.
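Here is the rough sketch of that "decide at auth time" idea, assuming a Flask application; STAGING_TESTERS and staging.example.com are hypothetical, and in practice the tester flag would live in your user database or configuration rather than in code.

```python
# A sketch of the "decide at auth time" idea above, assuming Flask.
# STAGING_TESTERS and staging.example.com are hypothetical; in practice the
# flag would live in your user database or config, not in code.
from flask import Flask, redirect, request, session

app = Flask(__name__)
app.secret_key = "change-me"  # needed for sessions; placeholder value

STAGING_TESTERS = {"alice@example.com", "bob@example.com"}
STAGING_HOST = "https://staging.example.com"

@app.before_request
def route_testers_to_staging():
    user = session.get("user")
    # Only users we have flagged ever see staging; everyone else stays on
    # production and never learns the staging URL. The host check avoids a
    # redirect loop if this same code happens to run on staging.
    if user in STAGING_TESTERS and request.host != "staging.example.com":
        return redirect(STAGING_HOST + request.full_path)

@app.route("/")
def index():
    return "production"
```

The point is that ordinary users only ever know the production URL; the flagged testers get bounced to staging by production itself, and turning the behaviour off is a one-line (or one-config) change.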
I find that users sometimes have difficulties with URLs, and don't like to have subtle changes like port number in the address.
The best approach I've found is to have the application tell the user what environment they are in.
For example, my teams have used absolutely positioned headers or footers, color coded for Dev/Staging environments that show the application version number with an alpha/beta tag, along with a message that says "Work done on this site will be lost, use Production (link) to keep your work." Typically we make the Dev area red, and the staging area yellow. We also like to put a link to the bug tracking system right in this area.
On production there is not usually a region like this. However, we do sometimes provide positive reinforcement by placing a green region with the app version and a Production tag in it, and then fading the green region away after a few seconds. This helps keep the app front and center, but lets the user know they are in the right place.
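As a sketch of that banner approach, assuming a Flask app and a hypothetical APP_ENV environment variable, a context processor can hand every template the colour, label, and warning text to render in that absolutely positioned header or footer:

```python
# Sketch of the environment banner idea, assuming a Flask app and a
# hypothetical APP_ENV environment variable ("dev", "staging", "production").
# Templates render the returned dict as an absolutely positioned header.
import os
from flask import Flask

app = Flask(__name__)

BANNERS = {
    "dev":     {"color": "#c0392b", "label": "DEV",
                "note": "Work done on this site will be lost. Use Production to keep your work."},
    "staging": {"color": "#f1c40f", "label": "STAGING",
                "note": "Work done on this site will be lost. Use Production to keep your work."},
    # Production shows a green banner briefly (fading handled client-side), or none at all.
    "production": {"color": "#27ae60", "label": "PRODUCTION", "note": ""},
}

@app.context_processor
def inject_env_banner():
    env = os.environ.get("APP_ENV", "dev")
    return {"env_banner": BANNERS.get(env, BANNERS["dev"]),
            "app_version": os.environ.get("APP_VERSION", "0.0.0")}
```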
I am interested in hearing how people do their Lisp webapp deployments and updates (especially updates) in production.
In Ruby many, myself included, use Capistrano for deployments. It provides some nice indirection and the ability to execute commands remotely and most importantly (in my mind) the ability to rollback to a working code base.
I know that the idea of a long running Lisp process being connected to via Swank through an SSH tunnel and modified in place is a popular idea that's knocked around, but I haven't drunk that Koolaid, mostly because of the issue of updating a stateful process (which seems like asking for trouble if something goes wrong - like unforeseen impedance mismatches between current state in memory and new object definitions that will soon be in memory).
Given that you can create nearly (or completely) stateless webapps using hunchentoot (or insert your favorite Lisp app server here), it seems like something along the lines of Capistrano could work for non-disruptive updates to Lisp code too, provided the Lisp process(es) hide behind nginx as its upstreams and you can correctly choreograph taking the hunchentoot processes down and spinning them back up after a code update, i.e., bringing them back up while leaving at least one hunchentoot process running in the cluster at any given moment. (CGI or mod_lisp could be used, but I am not particularly interested in that approach; though if you really like it, please at least say something about it, I want to learn.) For instance, using Passenger (which is comparing oranges to apples, since it spins up processes on demand), you touch tmp/restart.txt and the app server restarts, this time with freshly updated code: no interruptions from the user's perspective.
Well, this is a bit of a ramble, and actually I am about to try all this out, but I'd like to get some feedback on these ideas from others. Maybe you have a better idea.
Thanks
You can accomplish non-disruptive (zero downtime) deployments by writing capistrano scripts for an intelligent front-end/load balancer like HAProxy that pulls app servers out of rotation, restarts them with the newly deployed code, and puts them back in the mix.
By incrementally rolling your appservers while they are out of live rotation in production you can achieve smooth deployments.
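A rough sketch of that rotate-out/restart/rotate-in loop against HAProxy's admin socket is below; the backend and server names, the socket path, and the restart command are placeholders, and the "disable server" / "enable server" runtime commands require the stats socket to be configured at admin level.

```python
# Rough sketch of a rolling restart via HAProxy's admin socket.
# Backend/server names, the socket path, and the restart command are
# placeholders; "disable server" / "enable server" need "level admin"
# on the HAProxy stats socket.
import socket
import subprocess
import time

HAPROXY_SOCKET = "/var/run/haproxy.sock"   # hypothetical path
BACKEND = "app"                             # hypothetical backend name
SERVERS = ["app1", "app2", "app3"]          # hypothetical server names

def haproxy_cmd(cmd: str) -> str:
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(HAPROXY_SOCKET)
        s.sendall((cmd + "\n").encode())
        return s.recv(4096).decode()

for server in SERVERS:
    haproxy_cmd(f"disable server {BACKEND}/{server}")   # drain from rotation
    time.sleep(5)                                        # let in-flight requests finish
    # Placeholder: restart the app server with the newly deployed code
    subprocess.run(["ssh", server, "sudo systemctl restart myapp"], check=True)
    haproxy_cmd(f"enable server {BACKEND}/{server}")     # back into rotation
```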
This doesn't touch on having persistent app server loops with specific state, that seems scary for exactly the reasons you mentioned. REPLs are cool for debugging and tweaking, but your instincts to run the code on disk seem well founded.
The reason I ask is that Stack Overflow has been Slashdotted, and Redditted.
First, what kind of effect does this have on the servers that power a website? Second, what can system administrators do to ensure that their sites remain up and running as well as possible?
Unfortunately, if you haven't planned for this before it happens, it's probably too late and your users will have a poor experience.
Scalability is your first immediate concern. You may start getting more hits per second than you were getting per month. Your first line of defense is good programming and design. Make sure you're not doing anything stupid like reloading data from a database multiple times per request instead of caching it. Before the spike happens, you need to do some fairly realistic load tests to see where the bottlenecks are.
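As a tiny sketch of that caching point: data that rarely changes (site settings, lookup tables) can be memoized with a short TTL so a traffic spike doesn't turn into one identical database query per hit. load_site_settings() here is a hypothetical query, not part of any particular framework.

```python
# Tiny sketch of the "don't hit the database twice for the same thing" point.
# load_site_settings() is a hypothetical, rarely-changing query, so a
# process-level cache with a short TTL is enough; anything request-specific
# should instead be cached on the request object itself.
import time
import functools

def ttl_cache(seconds: int):
    def decorator(fn):
        cached = {"value": None, "expires": 0.0}
        @functools.wraps(fn)
        def wrapper():
            now = time.monotonic()
            if now >= cached["expires"]:
                cached["value"] = fn()       # only hit the database on expiry
                cached["expires"] = now + seconds
            return cached["value"]
        return wrapper
    return decorator

@ttl_cache(seconds=60)
def load_site_settings():
    # Placeholder for the real database query
    return {"title": "My Site", "items_per_page": 20}
```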
For absurdly high traffic, consider the ability to switch some dynamic pages over to static pages.
Having a server architecture that can scale also helps. Shared hosts generally don't scale. A single dedicated machine generally doesn't scale. Using something like Amazon's EC2 to host can help, especially if you plan for a cluster of servers from the beginning (even if your cluster is a single computer).
Your next major concern is security. You're suddenly a much bigger target for the bad guys. Make sure you have a good security plan in place. This is something you should always have, but it becomes more important with high usage.
Firstly, ask whether you really want to spend weeks and thousands of dollars planning for something that might not even happen, and that, if it does happen, lasts about 5 hours.
The easiest solution is to have a good way to switch to a page that simply allows a signup. People will sign up and you can email them when the storm has passed.
More elaborate solutions rely on being able to scale quickly. That's firstly a software issue (can you connect to a db on another server, can you do load balancing). Secondly, your hosting solution needs to support fast expansion. Amazon EC2 comes to mind, or maybe slicehost. With both services you can easily start new instances ("Let's move the database to a different server") and expand your instances ("Let's upgrade the db server to 4GB RAM").
If you keep all data in the db (including sessions), you can easily have multiple front-end servers. For the database I'd usually try a single server with the most resources available, but only because I haven't worked with db replication and it used to be quite hard to do, at least with MySQL. Things might have improved.
The app designer needs to think about scaling up (larger machines with more cores and higher performance) and/or scaling out (distributing workload across multiple systems). The IT guy needs to work out how to best support that. The network is what you look at first, because obviously everything rides on top of it. Starting at the border, that usually means network load balancers and redundant routers being served by multiple providers. You can also look at geographic caching services and apps such as cachefly.
You want to reduce your bottlenecks as much as possible. You also want to design the environment so that it can be scaled out as needed without much work. Do the design work up front and it'll mean fewer headaches when you do get dugg.
Some ideas (of what I used in the past and current projects):
For boosting performance (if needed) you can put a reverse-proxying, caching Squid in front of your server. Of course, that only works if you don't have session keys and if the pages are somewhat static (meaning they change only once an hour or so) and not personalised.
With Squid you can boost a bloated and slow CMS like TYPO3, thus getting the performance of static websites with the comfort of a CMS.
You can outsource large files to external services like Amazon S3, saving your server's bandwidth (see the sketch after this list).
And if you are able to spend some (three figures per month) bucks, you can also use a Content Delivery Network. With that in place you automatically get scaling, high availability, and low latencies for your users. Of course, your pages must be cacheable, so session keys and personalised pages are a no-no. If designed carefully and with CDNs in mind, you can at least cache SOME content, like pics, videos, and static stuff.
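Here is a sketch of the S3 idea from the list above, using boto3; the bucket name and file path are placeholders, and in practice you would usually front the bucket with a CDN rather than link to it directly.

```python
# Sketch of the "push large files to S3" idea from the list above, using
# boto3. Bucket name and file path are placeholders; in practice you would
# front the bucket with a CDN rather than link to it directly.
import boto3

s3 = boto3.client("s3")
BUCKET = "example-static-assets"   # hypothetical bucket

def publish_asset(local_path: str, key: str) -> str:
    # Upload once at deploy time; after this the file is served from S3/CDN
    # bandwidth instead of your web server's.
    s3.upload_file(local_path, BUCKET, key,
                   ExtraArgs={"ContentType": "video/mp4",
                              "CacheControl": "public, max-age=86400"})
    return f"https://{BUCKET}.s3.amazonaws.com/{key}"

print(publish_asset("promo.mp4", "videos/promo.mp4"))
```

After the upload, pages link to the returned URL, so the large file is served from S3's (or the CDN's) bandwidth instead of your own server's.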
The load goes up, as other answers have mentioned.
You'll also get an influx of new users/blog comments/votes from bored folks who are only really interested in vandalism. This is mostly a problem for blogs which allow completely anonymous commenting, where some dreadful stuff will be entered. The blog platform might have spam filters sufficient to block it, but manual intervention is frequently required to clean up remaining drivel.
Even a little barrier to entry, like requiring a user name or email address even if no verification is done, will dramatically reduce the volume of the vandalism.