Which exploit and which payload should I use?

Hi everyone, and sorry for my bad English.
I'm learning penetration testing.
After reconnaissance and scanning of my target, I have enough information to move to the next phase.
Some of the information I have: open ports with the related running services, the names and versions of those services, the operating system of the device, the firewalls in use, etc.
I launched msfconsole.
Now, based on the information collected, I need to find the correct exploit and payload to gain access. I've read the Metasploit Unleashed guide from Offensive Security, so I've learned the Metasploit fundamentals and the use of msfconsole.
But I don't understand how to start all of this. Assume my target has 20 open ports. I want to test the vulnerabilities using an exploit and payload that do not require user interaction. That reduces the possible exploits and payloads, but there are still too many. Searching for and testing every exploit and payload against each port isn't good! So, if I don't know the target's vulnerability, how do I proceed?
I would like to be aware of what I'm doing, not just try things without understanding.

A couple of things:
We have a Stack Exchange for security! Check it out at https://security.stackexchange.com/
For an answer: you want to look for "remote exploits", as those do not require user interaction. You can find a curated list of such exploits here: https://www.exploit-db.com/remote/
You can search the services on that page for something that matches the same service/version as your attack vector.
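For example, if your scan showed vsftpd 2.3.4 listening on port 21, you could narrow things down inside msfconsole itself rather than testing blindly. This is only an illustration: the module and target address below are placeholders, and option names vary between modules and Metasploit versions, so check show options first:

    msf > search type:exploit name:vsftpd
    msf > use exploit/unix/ftp/vsftpd_234_backdoor
    msf > show payloads
    msf > show options
    msf > set RHOST 10.0.0.5
    msf > exploit

The general pattern is: map each service/version from your scan to candidate modules with search, read the module's info page, and only then pick a payload that fits the target platform.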


Looking for nomenclature & architecture ideas for a server acting as a via point for user-data exchange

I have been struggling to find a good architecture, or even any nomenclature, for what I'm trying to do here. I'm looking for nomenclature so I can have a starting point for research, and I want the same for architecture, but I'll take whatever help anyone wants to offer.
What I'm trying to do & learn about
In a nutshell, I need my clients to exchange public keys and other security data, such as ACL IDs, names, etc.
Current architectural attempts
I'm currently using my server as a via point, mainly because I can't see any other way of doing this securely, and this method uses many layers of security. I also don't know of any other method of going from client app to client app securely.
A client creates a group and sends its public key to the server, then opens a live query to receive the other users' data. Another user (with secrets passed to them) queries the server for the public key, then sends their own data to the admin user via the server. The admin then sends the remainder of their own data. I'm leaving out trivial security details, but this is the gist of what I'm doing.
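For what it's worth, here is a minimal sketch of that flow, with a plain in-memory dict standing in for the server and all names invented for illustration:

    # Hypothetical in-memory "server" acting as a via point.
    server = {"groups": {}}

    def create_group(group_id, admin_pubkey):
        # The admin registers the group and publishes their public key.
        server["groups"][group_id] = {"admin_pubkey": admin_pubkey, "members": {}}

    def join_group(group_id, user_id, user_pubkey):
        # A joining user publishes their own key and gets the admin's back.
        group = server["groups"][group_id]
        group["members"][user_id] = {"pubkey": user_pubkey}
        return group["admin_pubkey"]

    # Usage: the admin creates a group; a second user joins and
    # receives the admin's public key in return.
    create_group("lab", admin_pubkey=b"admin-public-key")
    admin_key = join_group("lab", "alice", b"alice-public-key")

Writing the exchange down this concretely, even as toy code, is often the fastest way to spot where the back-and-forth (or the infinite loop) actually lives.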
Issues
This is really just logical back and forth, but I honestly don't know what I'm doing. I don't even know if what I'm doing is right or the best way, and I've also got a crazy infinite loop I'm trying to solve.
I'm looking for some terminology, a description, and/or architectural pointers; I'll take any input I can get.
Forget terminology, nomenclature and architecture.
Define the problem you are trying to solve in a simple sentence.
Break the issues down into smaller, bite-sized pieces:
You send A data to the server.
What happens to the A data?
Is there any feedback or acknowledgement from the target host?
What sort of application is this? Web, mobile, traditional client/server?
The most elegant solutions are usually the simplest ones.
Sit down and determine whether you have a problem to solve in the first place.

How can the Uchiwa dashboard be used to adjust thresholds?

Me again...
I have the whole Sensu-Uchiwa-Graphite setup done, and now I've got a new request. :( Rather than changing the thresholds in the check.json files on the Sensu server, is there any plugin for Uchiwa so that this adjustment can be made from the Uchiwa dashboard? I'm asking because my application teams want to change the thresholds by themselves, without access to the server.
I think Sensu Admin in the Enterprise version could do it, but we would need to pay big money per year. ;(
Thanks in advance for your help.
Sumana W.
This is fairly doable if you use a configuration management system like Chef/Ansible/Puppet, especially if you run standalone checks on the sensu-client.
This allows the clients to define their own thresholds, rather than changing the Sensu servers themselves.
See https://sensuapp.org/docs/latest/reference/checks.html#standalone-checks
In this case, the definitions for the checks sit on the client servers, and the clients have the choice of their thresholds and configurations. The client itself manages how often to run the check and sends the output back to the server, rather than the server requesting the checks. This helps quite a bit with scaling and multitenancy.
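For example, a standalone check dropped into /etc/sensu/conf.d/ on the client might look like this (the command, thresholds, and interval are illustrative):

    {
      "checks": {
        "cpu_check": {
          "command": "check-cpu.sh -w 80 -c 90",
          "interval": 60,
          "standalone": true
        }
      }
    }

Because this file lives on the client, your application teams can edit the -w/-c thresholds (or have Chef/Ansible/Puppet template them) without ever touching the Sensu server.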
The other way to accomplish this, if you are tied to server-side checks, would be to use client attributes (https://sensuapp.org/docs/0.25/reference/checks.html#check-token-substitution).
For example, you can have a CPU check that runs something like check-cpu.sh -w :::cpu_warn::: -c :::cpu_critical:::, where the cpu_warn and cpu_critical values come from the client.json on the client server.
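A matching client.json might carry the attributes like this (names and values are illustrative; the check command above substitutes them at run time):

    {
      "client": {
        "name": "app-server-01",
        "address": "10.0.0.5",
        "subscriptions": ["default"],
        "cpu_warn": 80,
        "cpu_critical": 90
      }
    }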
Source: We use sensu extensively in an enterprise environment across thousands of hosts and have been working through these same issues.

Using CouchDB as an interface. Is it an appropriate approach?

Our devices (microscopes with cameras) produce images, plus additional information for each image.
Now a middleware supplier wants to connect these devices to a lab automation system. They have to acquire the data, and we have to provide it. The astonishing thing for me was their suggested interface: a very cryptic token-separated format (ASTM E1394-97). Unfortunately, they can't even accommodate images in their protocol, and are aiming to get file paths instead.
I thought this was not an up-to-date approach. While looking for alternatives, I came across CouchDB.
So my idea was: our devices would import their data, including images, into CouchDB, and the middleware could fetch it from there. It even seems that, using mustache templates, we could produce the format they want (ASCII text), placing URLs as image references instead of paths.
My question is: has someone already applied CouchDB to such a use case? It seems to be a bit of a misuse of CouchDB, as the main intention here is an interface, not data storage. Another point that bothers me is that the inventor of CouchDB has moved on to another project, Couchbase. Could that mean a lack of support for CouchDB in the future?
Thank you very much for any insights and suggestions!
It's an OK use case, and we are actually using CouchDB in that way: as proxying middleware between medical laboratory analyzers and a LIS. Some of the analyzers publish images or PDF data to shared folders, and we just load those into the related document as attachments.
Moreover, you may like to know that CouchDB can manage external processes (aka os_daemons) and take care of their lifespan: restarting them if something terminates them, and starting them right after you update the config options through the HTTP interface. This helps for setting up ASTM client and server processes, since that protocol is different from HTTP (which is native for CouchDB); those processes communicate with the devices and create documents as regular CouchDB clients. In the same way, you can set up daemons that monitor shared folders for specific files. And all of this is just CouchDB with a few loosely coupled plugins.
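As a rough illustration of the attachment part, loading an image into a document is just an HTTP PUT; here is a sketch in Python (database name, document id, and file are hypothetical):

    # Attach an image to an existing CouchDB document over its HTTP API.
    import requests

    base = "http://localhost:5984/microscopy"
    doc_id = "sample-0001"

    # Fetch the document's current revision, required for updates.
    rev = requests.get(f"{base}/{doc_id}").json()["_rev"]

    # PUT the image as an attachment; CouchDB will then serve it back
    # at this same URL, which can be embedded in the generated ASCII
    # output instead of a file path.
    with open("image.png", "rb") as f:
        requests.put(
            f"{base}/{doc_id}/image.png",
            params={"rev": rev},
            data=f.read(),
            headers={"Content-Type": "image/png"},
        )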

How should I benchmark a system to determine the overall best architecture choice?

This is a bit of an open-ended question, but I'm looking for an open-ended answer. I'm looking for a resource that can help explain how to benchmark different systems, but more importantly how to analyze the data and make intelligent choices based on the results.
In my specific case, I have a four-server setup, including MongoDB, that serves as the backend for an iOS game. All servers are running Ubuntu 11.10. I've read numerous articles that make suggestions like "if CPU utilization is high, make this change," but as a newcomer to backend architecture, I have no concept of what "high CPU utilization" is.
I am using MongoDB's monitoring service (MMS) and am gathering some information from it, but I don't know how to make choices or identify bottlenecks with it. Other servers pass requests from the game client to Mongo and back, but I'm not quite sure how I should be benchmarking or logging important information on them. I'm also using Amazon EC2 to host all of my instances, which provides some information as well.
So, some questions:
What statistics are important to log in a backend setup? (CPU, RAM, etc.)
What is a good way to monitor those statistics?
How do I analyze the statistics? (RAM usage is high, read requests are low, etc.)
What tips should I know before trying to create a stress-test or benchmarking script for my architecture?
Again, if there is a resource that answers many of these questions, I don't need an explanation here, I was just unable to find one on my own.
If more details regarding my setup are helpful, I can provide those as well.
Thanks!
I like to think of performance testing as a mini-project that is undertaken because there is a real-world need. Start with the problem to be solved: is the concern that users will have a poor gaming experience if the response time is too slow? Or is the concern that too much money will be spent on unnecessary server hardware?
In short, what is driving the need for the performance testing? This exercise is sometimes called "establishing the problem to be solved." It is about the goal to be achieved, because if there is no goal, why go through all the work of testing the performance? Establishing the problem to be solved will eventually drive what to measure and how to measure it.
After the problem is established, the next step is to write down what questions have to be answered to know when the goal has been met. For example, if the goal is to ensure the response times are low enough to provide a good gaming experience, some questions that come to mind are:
What is the maximum response time before the gaming experience becomes unacceptably bad?
What is the maximum response time that is indistinguishable from zero? That is, if a 200 ms response time feels the same to a user as a 1 ms response time, then the lower bound for response time is 200 ms.
What client hardware must be considered? For example, if the game only runs on iOS 5 devices, then testing an original iPhone is not necessary because the original iPhone cannot run iOS 5.
These are just a few questions I came up with as examples. A full, thoughtful list might look a lot different.
After writing down the questions, the next step is to decide what metrics will provide answers to the questions. You have probably come across a lot of metrics already: response time, transactions per second, RAM usage, CPU utilization, and so on.
After choosing some appropriate metrics, write some test scenarios. These are the plain English descriptions of the tests. For example, a test scenario might involve simulating a certain number of games simultaneously with specific devices or specific versions of iOS for a particular combination of game settings on a particular level of the game.
Once the scenarios are written, consider writing the test scripts for whatever tool is simulating the server work loads. Then run the scripts to establish a baseline for the selected metrics.
After a baseline is established, change parameters and chart the results. For example, if one of the selected metrics is CPU utilization versus the number of TCP packets entering the server per second, make a graph to find out how utilization changes as packets/second goes from 0 to 10,000.
In general, observe what happens to performance as the independent variables of the experiment are adjusted. Use this hard data to answer the questions created earlier in the process.
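As a concrete starting point, even a very simple script can establish a baseline for one endpoint before you reach for heavier tooling. This is only a sketch; the URL and request count are hypothetical:

    # Measure response times for a single endpoint and report percentiles.
    import statistics
    import time
    import urllib.request

    URL = "http://my-game-backend.example.com/health"  # hypothetical
    latencies = []

    for _ in range(100):
        start = time.perf_counter()
        urllib.request.urlopen(URL).read()
        latencies.append((time.perf_counter() - start) * 1000)  # milliseconds

    latencies.sort()
    print(f"median: {statistics.median(latencies):.1f} ms")
    print(f"p95:    {latencies[int(len(latencies) * 0.95)]:.1f} ms")

Run it while varying one independent variable at a time (instance size, number of simulated clients, and so on) and chart the percentiles, exactly as described above.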
I did a Google search on "software performance testing methodology" and found a couple of good links:
Check out this white paper Performance Testing Methodology by Johann du Plessis
Have a look at the Methodology section of this Wikipedia article.

What code should I write for a dongle-attached system to provide better security?

I have developed a piece of software (in C and Python) which I want to protect with a dongle, so that copying and reverse engineering become hard enough. My dongle device comes with an API which provides these functions:
Check dongle existence
Check for the proper dongle
Write into a memory location on the dongle
Read from a memory location on the dongle, etc. (I think the rest aren't that good.)
What can I do in the source code so that it becomes harder to crack? The dongle provider suggested that I check for the proper dongle's existence in a loop or after an event, or that I use the dongle memory in an efficient way. But how? I have no idea how crackers crack. Please shed some light. Thanks in advance.
P.S.: Please don't suggest obfuscation. I have already done that.
First of all, realize that the dongle will only provide a small obstacle. Someone who knows what they're doing will just remove the call to the dongle and put in a 'true' for whatever result it returned. Everyone will tell you this. But there are roadblocks you can add!
I would find a key portion of your code, something that's difficult or hard to know, something that requires domain knowledge, and put that knowledge onto the key. One example of this would be shader routines. Shader routines are text files that are sent to a graphics card to achieve particular effects; a very simple brightness/contrast filter would take less than 500 characters to implement, and you can store that in the user space on most dongles. You then put that information on the key, and only use information from the key in order to show images. That way, if someone simply removes your dongle, all the images in your program will be blacked out. To crack it, someone would need a copy of your program, would have to grab the text file from the key and modify your program to include it, and would have to know that that particular file is the 'right' way to display images. The particulars of the implementation depend on your deployment platform. If you're running a program in WPF, for instance, you might be able to store a DirectX routine on your key, then load that routine from the key and apply the effect to all the images in your app. The cracker then has to be able to intercept that DirectX routine and apply it properly.
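To make the idea concrete, here is a sketch in Python with an entirely hypothetical vendor API (the dongle module and its read_memory call are stand-ins for whatever read function your dongle's API actually provides):

    # The shader source lives only in dongle memory, not in the binary.
    import dongle  # hypothetical vendor wrapper around the dongle API

    def load_brightness_shader() -> str:
        # Read the stored routine out of the key's user memory.
        raw = dongle.read_memory(offset=0, length=500)
        source = raw.rstrip(b"\x00").decode("ascii", errors="replace")
        if "gl_FragColor" not in source:
            # Missing or wrong key: fall back to a shader that blacks
            # out every image, so a stripped binary is visibly broken.
            return "void main() { gl_FragColor = vec4(0.0); }"
        return source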
Another possibility is to use the key's random number generation routines to develop UIDs. As soon as someone removes the dongle functionality, all generated UIDs will be zeroed.
The best thing to do, though, is to put a domain specific function onto the dongle (such as the entire UID generation routine). Different manufacturers will have different capabilities in this regard.
How much of a roadblock will these clevernesses buy you? Realistically, it depends on the popularity of your program. The more popular your program is, the more likely someone will want to crack it, and the more time they will devote to doing so. In that scenario, you might gain a few days if you're particularly good at dongle coding. If your program is not that popular (only a few hundred customers, say), then just the presence of a dongle could be deterrent enough, without having to do anything clever.
Crackers will crack it by sniffing the traffic between your app and the dongle, and then either disabling any code that tests for dongle presence or writing code to emulate the dongle (e.g. by replaying recorded traffic), whichever looks easier.
Obfuscating the testing code, scattering many pieces of code that perform the tests in different ways, and separating, spatially and temporally, the effect of a test (disabling or degrading functionality, displaying a warning, etc.) from the test itself all make the former method harder.
Mutating the content of the dongle with each test, based on some random nonce created on each run (or possibly even preserved between runs), so that naively recording and replaying the traffic does not work, makes the latter method harder.
However, with the system as described, it is still straightforward to emulate the dongle, so sooner or later someone will do it.
If you have the ability to execute code inside the dongle, you could move code that performs functions critical to your application into it. This would mean that the crackers must either re-derive the code or break the dongle's physical security - a much more expensive proposition (though still feasible; realize that there is no such thing as perfect security).
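Here is a sketch of the nonce idea, assuming a hypothetical dongle that can compute an HMAC over a challenge inside the hardware (the dongle module and its hmac_sha256 call are invented for illustration):

    # Nonce-based challenge-response: replayed traffic is useless because
    # every check uses a fresh random challenge.
    import hashlib
    import hmac
    import os

    import dongle  # hypothetical vendor wrapper

    REFERENCE_KEY = b"..."  # shared secret; see the caveat below

    def dongle_present() -> bool:
        challenge = os.urandom(16)                # fresh nonce per check
        response = dongle.hmac_sha256(challenge)  # computed inside the dongle
        expected = hmac.new(REFERENCE_KEY, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(response, expected)

The caveat, as noted above: if the reference secret is also embedded in the application, an attacker can extract it and emulate the dongle, which is why moving the critical computation itself into the dongle is the stronger option.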
How to maximize protection with a simple dongle?
Use the API together with an enveloper, if an enveloper exists for your resulting file format. This is a very basic rule, because the enveloper already comes equipped with some anti-debugging and obfuscation methods that make common newbie hackers give up on cracking the program. Using only the enveloper is not recommended either, because once a hacker can break the enveloper's protection in another program, they can break yours as well.
Call the dongle APIs in a LOT of places in your application: for example, at first start-up, when opening a file, when a dialog box opens, and before processing any information. Also, maybe do some random checking even when nothing is being done at all (see the sketch below).
Use more than one function to protect the program. Do not rely only on the find function to look for a plugged-in dongle.
Use multiple DLLs/libraries (if applicable) to call the dongle functions. In case one DLL is hacked, there are still other parts of the software that use the functions from another DLL. For example, copy sdx.dll to print.dll, open.dll, and other names, then define the function calls from each DLL with different names.
If you use a DLL file to call the dongle functions, bind it together with the executable. There are quite a few programs capable of doing this, for example PEBundle.
I found this article on PRLog quite useful for maximizing protection with a simple dongle; maybe this link will help you:
Maximizing Protection with a Simple Dongle for your Software
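As a toy illustration of scattering those checks in Python (the check_dongle call is a hypothetical stand-in for the vendor API):

    # Guard several entry points with a dongle check.
    import functools
    import sys

    import dongle  # hypothetical vendor wrapper

    def requires_dongle(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if not dongle.check_dongle():
                sys.exit("hardware key not found")
            return func(*args, **kwargs)
        return wrapper

    @requires_dongle
    def open_file(path):
        ...

    @requires_dongle
    def process_data(data):
        ...

Note that funnelling every test through one wrapper like this is itself a single point of failure (a cracker patches one function and wins), which is exactly the trade-off discussed further down; in practice you would vary the checks rather than reuse one.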
You can implement many checkpoints in your application.
I don't know if you use HASP, but unfortunately, dongles can be emulated.
You may want to look into using Dinkey Dongles for your copy protection.
It seems to be a very secure system, and the documentation gives you tips for improving your overall security when using it.
http://www.microcosm.co.uk/dongles.php
Ironically, the thing you want to discourage is not piracy by users, but theft by vendors. The internet has become such a lawless place that vendors can steal and resell your software at will. You have legal recourse in some cases, and not in others.
Nothing is fool-proof, as previously stated. Also, the more complex your security is, the more likely it is to cause headaches or problems for legitimate users.
I'd say the most secure application is always the one tied closest to the server. Sadly, then users worry about it being spyware.
If you make a lot of different calls to your dongle, then maybe the cracker will just emulate your dongle, or find a single point of failure (it is quite common that changing one or two bytes makes all your calls useless). It is a no-win situation.
As the author of PECompact, I always tell customers that they cannot rely on anything to protect their software, as it can and will be cracked if a dedicated cracker goes after it. The harder you make it, the more of a challenge (and fun) it is for them.
I personally use very minimal protection techniques on my software, knowing these facts.
Use a smartcard, and encrypt/decrypt the working files through a secret function stored on the card. The software can then still be pirated, but the pirated copy will not be able to open properly encrypted working files.
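A sketch of that scheme, with a hypothetical card interface whose decrypt routine runs inside the card, so the key never leaves the hardware:

    # Working files are only readable through the card.
    import card_api  # hypothetical smartcard wrapper

    def open_working_file(path: str) -> bytes:
        with open(path, "rb") as f:
            ciphertext = f.read()
        # A pirated copy without the card cannot perform this step,
        # because the decryption key exists only inside the card.
        return card_api.aes_decrypt(ciphertext)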
I would say that if someone wants to crack your software protection, they will do so. When you say 'hard enough', how should 'enough' be interpreted?
A dongle will perhaps prevent your average user from copying your software, so in that sense it is already 'enough'. But anyone who feels the need, and has the ability, to circumvent the dongle will likely be able to get past any other scheme that you engineer.