Is there any tooling for bpfilter that allows configuring a firewall?

I'd like to know about bpfilter. I can't use netfilter (too slow) or nftables (it doesn't cover my feature set).
Kernel says:
CONFIG_BPFILTER: This builds the experimental bpfilter framework that is aiming to provide netfilter-compatible functionality via BPF
Is there any:
tooling that allows configuring firewall rules using BPF instead of netfilter?
documentation that makes it easy to jump into the subject?
a manual?
So far I have only found one LWN post explaining how cool bpfilter is, but for admin purposes it's useless:
https://lwn.net/Articles/747551/
Is it too new and too sketchy to even care about?

As of early 2019, bpfilter is still under development and not usable yet. The basic skeleton is in place and can even be enabled on 4.18+ kernels, but it does not do much for now, as it is not complete. The code required for translating iptables rules into BPF bytecode, although submitted along with the original RFC, has not made it into the kernel at this time.
Once it is ready, there should not be any specific tooling required. Bpfilter will likely be enabled with something like modprobe bpfilter, and then the whole idea is to transparently replace the back end while leaving the front end untouched: iptables should be the only tool required for handling the rules, without any particular option needed. Additionally, bpftool lets you inspect the eBPF programs (including iptables rules translated by bpfilter) loaded into the kernel.
If you want, you can check this out in the following video (disclaimer: by my company), which shows how we used bpfilter with a classic iptables rule (we had patched the kernel with the code from the RFC; executing bpfilter.ko in the console will not be necessary in the final version).
You can still attach BPF programs to the XDP hook (at the driver level), even without using bpfilter, to get much better performance than netfilter offers. However, you would have to completely rewrite your rules as C programs, compile them into eBPF with clang, and load them with e.g. the ip tool (from iproute2). I don't know whether this would match your “feature set”. Depending on how strong your need is, another, more drastic option could be to move your packet processing to user space and reimplement your setup with the DPDK framework.
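To give a feel for the rewrite, here is a minimal sketch of an XDP filter. It uses the BCC Python bindings as the loader instead of the clang + ip workflow described above; the interface name and the drop-all-ICMP “rule” are placeholders for the example:

```python
# A minimal XDP "firewall rule": drop all ICMP on one interface.
# Requires the bcc package and root privileges.
from bcc import BPF
import time

prog = r"""
#define KBUILD_MODNAME "xdp_filter"
#include <uapi/linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <linux/in.h>

int xdp_filter(struct xdp_md *ctx) {
    void *data = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)      /* bounds checks for the verifier */
        return XDP_PASS;
    if (eth->h_proto != htons(ETH_P_IP))
        return XDP_PASS;

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end)
        return XDP_PASS;

    if (ip->protocol == IPPROTO_ICMP)      /* the actual "rule" */
        return XDP_DROP;
    return XDP_PASS;
}
"""

device = "eth0"                            # placeholder: your NIC
b = BPF(text=prog)
b.attach_xdp(device, b.load_func("xdp_filter", BPF.XDP), 0)
print("Dropping ICMP on %s; Ctrl-C to detach" % device)
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    b.remove_xdp(device, 0)
```

With the clang + ip workflow, you would instead compile the same C code ahead of time and attach it with something like ip link set dev eth0 xdp obj prog.o.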

It looks like there is a tool that does this named bpf-iptables. Better still, it appears to use the normal iptables syntax. I have not yet used it myself, but I think I will try it the next time I have to set up iptables.


Which exploit and which payload should I use?

Hi everyone and sorry for my bad English.
I'm learning penetration testing.
After reconnaissance and scanning of my target, I have enough information to move on to the next phase.
The information I have includes open ports with their running services, the names and versions of those services, the operating system of the device, the firewalls used, etc.
I launched msfconsole.
Based on the information collected, I should find the correct exploit and payload to gain access. I've read the Metasploit Unleashed guide from Offensive Security, and I've learned the Metasploit fundamentals and the use of msfconsole.
But I don't understand how to start all of this. Assuming my target has 20 open ports, I want to test the vulnerabilities using an exploit and payload that do not require user interaction. That narrows down the possible exploits and payloads, but there are still too many. Searching and testing every exploit and payload for each port isn't practical! So, if I don't know the target's vulnerabilities, how do I proceed?
I would like to understand what I'm doing, not just try things blindly.
Couple of things:
We have a Stack Exchange site for security! Check it out at https://security.stackexchange.com/
For an answer: you want to look for "remote exploits", as those do not require user interaction. You can find a curated list of exploits here: https://www.exploit-db.com/remote/
You can search the services on that page for something that matches the service/version of your attack vector.

Feedback desired: non-disruptive deployment strategies for production Lisp webapps

I am interested in hearing how people do their Lisp webapp deployments and updates (especially updates) in production.
In Ruby, many people, myself included, use Capistrano for deployments. It provides some nice indirection, the ability to execute commands remotely, and, most importantly (in my mind), the ability to roll back to a working code base.
I know that the idea of a long-running Lisp process that you connect to via Swank through an SSH tunnel and modify in place is a popular one that gets knocked around, but I haven't drunk that Kool-Aid, mostly because of the issue of updating a stateful process (which seems like asking for trouble if something goes wrong, like unforeseen impedance mismatches between the current state in memory and the new object definitions that will soon be in memory).
Given that you can create nearly (or completely) stateless webapps using Hunchentoot (or insert your favorite Lisp app server here), it seems that something like Capistrano could be used for non-disruptive updates to Lisp code too, provided the Lisp processes hide behind nginx in its upstream channel and you correctly choreograph taking the Hunchentoot processes down and spinning them back up after a code update, i.e., bringing them back up while leaving at least one Hunchentoot process running in the cluster at any given moment. (CGI or mod_lisp could be used, but I am not particularly interested in that approach; though if you really like it, please at least say something about it, as I want to learn.) For instance, using Passenger (which is comparing oranges to apples, since it spins up processes on demand), you touch tmp/restart.txt and the app server restarts, this time with freshly updated code, with no interruption from the user's perspective.
Well, this is a bit of a ramble, and I'm actually about to try all of this out, but I'd like to get some feedback on these ideas from others first. Maybe you have a better idea.
Thanks
You can accomplish non-disruptive (zero-downtime) deployments by writing Capistrano scripts for an intelligent front-end load balancer like HAProxy: pull app servers out of rotation, restart them with the newly deployed code, and put them back into the mix.
By incrementally rolling your app servers while they are out of live rotation in production, you can achieve smooth deployments.
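As a rough illustration, here is a sketch of that loop in Python, assuming HAProxy exposes an admin-level stats socket (stats socket /var/run/haproxy.sock level admin in haproxy.cfg) and a backend named lisp with servers app1 and app2; the socket path, server names, and the systemd unit used for the restart are all hypothetical:

```python
# Rolling restart: drain each app server via HAProxy's admin socket,
# restart it with the new code, and put it back into rotation.
import socket
import subprocess
import time

HAPROXY_SOCK = "/var/run/haproxy.sock"          # hypothetical path
SERVERS = [("lisp", "app1"), ("lisp", "app2")]  # hypothetical backend/servers

def haproxy_cmd(cmd):
    """Send one command to HAProxy's UNIX admin socket and return the reply."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(HAPROXY_SOCK)
    s.sendall((cmd + "\n").encode())
    reply = s.recv(4096).decode()
    s.close()
    return reply

for backend, server in SERVERS:
    haproxy_cmd("disable server %s/%s" % (backend, server))  # out of rotation
    time.sleep(5)                 # let in-flight requests drain; tune this
    # Hypothetical unit name: restart this one hunchentoot instance with
    # the freshly deployed code, however you actually manage the process.
    subprocess.run(["systemctl", "restart", "lisp-app@%s" % server], check=True)
    haproxy_cmd("enable server %s/%s" % (backend, server))   # back in the mix
```

Because at least one server stays in rotation at every step, users never see an outage.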
This doesn't touch on having persistent app-server loops with specific state; that seems scary for exactly the reasons you mentioned. REPLs are cool for debugging and tweaking, but your instinct to run the code that's on disk seems well founded.

What is needed to make my own SNMP agent and server?

Hi,
I want to make my own SNMP server and agent, with my own MIB and OIDs.
How can I do it, and where should I start?
Also, if I want to use the Windows SNMP service, extend it, and insert my own OIDs into its MIB, is that possible? If yes, how can I do it?
There is an excellent open-source implementation for the .NET Framework called SharpSnmpLib. It can implement a normal SNMP server, and it allows you to load your own custom MIBs.
A couple of tips:
You can find existing MIBs at oidview or in the Cisco MIB Browser
Avoid v3 and the RFCs that belong to it (in fact, I'd avoid the RFCs altogether; they're confusing and cover many areas that were never adopted)
Test early and often, with machines as close to the production setup as you can (a quick test client is sketched below)
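To illustrate that last tip, here is a quick test client using pysnmp; it assumes pysnmp is installed and that your agent listens on localhost:161 with community "public", and it polls sysDescr.0 (1.3.6.1.2.1.1.1.0) from the standard MIB-2 tree:

```python
# Poll sysDescr.0 from the agent under development (pip install pysnmp).
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

errorIndication, errorStatus, errorIndex, varBinds = next(getCmd(
    SnmpEngine(),
    CommunityData('public', mpModel=1),                # SNMPv2c
    UdpTransportTarget(('127.0.0.1', 161)),            # hypothetical agent
    ContextData(),
    ObjectType(ObjectIdentity('1.3.6.1.2.1.1.1.0'))))  # sysDescr.0

if errorIndication:
    print(errorIndication)
elif errorStatus:
    print('%s at %s' % (errorStatus.prettyPrint(), errorIndex))
else:
    for varBind in varBinds:
        print(' = '.join(x.prettyPrint() for x in varBind))
```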
If you ever start implementing any standardized protocol, the first step is to read the standards defining it. In the case of SNMPv3, the relevant standards are
RFCs 3411, 3412, 3413, 3414, 3415, 3416, 3417, and 3418.
The good (and bad) thing about RFCs is that they usually state very clearly what you MUST, SHOULD, MUST NOT, SHOULD NOT, and MAY do in your implementation.

How to limit the effect of client modifications to production systems

Our shop has developed a few WEB/SMS/DB solutions for a dozen client installations. The applications have some real-time performance requirements and are just good enough to function properly. The problem is that the clients (the owners of the production servers) are using the same server/database for customizations that are causing problems with the performance of the applications we created and deployed.
A few examples of clients' customizations:
Adding large tables whose columns are all text datatypes and get cast to other data types in the queries
No primary keys, indexes, or FK constraints
Use of external scripts that run count(*) from table where id = x in a loop to determine how to construct more queries later in the same script (no bulk actions that the planner can optimize, nothing done in a single pass)
All new code files on the server are created/owned by root, with 0777 permissions
The clients don't take suggestions/criticism well. If we just go ahead and try to port/change the scripts ourselves, the old code can come back, clobbering any changes that we make! And with our limited knowledge of their use cases, we risk breaking functionality while trying to optimize their changes.
My question is this: how can we limit the resources available to queries/applications other than the ones we create and deploy? Are there any pragmatic options in scenarios like this? We prided ourselves on having an OSS solution, but it seems to have become a liability.
We use PG 8.3 running on a range of Linux distros. The clients prefer PHP, but shell scripts, Perl, Python, and PL/pgSQL are all used on the system in one form or another.
This problem started about two minutes after the first client was given full access to the first computer, and it hasn't gone away since. Anytime someone's priority is getting business-oriented work done quickly, they will be sloppy about it and screw things up for everyone. That's just how things work, because proper design and implementation are harder than cheap hacks. You're not going to solve this problem; all you can do is figure out how to make it easier for the client to work with you than against you. If you do it right, it will look like excellent service rather than nagging.
First off, the database side. There's no way to control query resources in PostgreSQL. The main difficulty is that tools like nice control CPU usage, but if the database doesn't fit in RAM, it may very well be I/O usage that is killing you. See this developer message summarizing the issues.
Now, if it is in fact CPU that the clients are burning through, you can use two techniques to improve the situation:
Install a C function that changes the process priority (example 1, example 2) and make sure it gets called first whenever they run something (maybe put it into their psql configuration file; there are other ways).
Write a script that looks for postmaster processes spawned by their user ID and renices them; run it often from cron or as a daemon (a sketch follows this list).
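Here is a rough sketch of the second point, assuming Linux (/proc), Python 3.3+ for os.setpriority, and a hypothetical database role name; PostgreSQL backends advertise the connected user in their process title, which is what the script matches on:

```python
# Renice PostgreSQL backends that belong to the client's database role.
# Run it from cron (e.g. every minute) as a user allowed to renice them.
import os

CLIENT_DB_USER = "clientuser"   # hypothetical: the client's database role
NICE_VALUE = 15                 # lower priority; 19 is the floor

for pid in filter(str.isdigit, os.listdir("/proc")):
    try:
        with open("/proc/%s/cmdline" % pid, "rb") as f:
            title = f.read().replace(b"\x00", b" ").decode(errors="replace")
        # Backend process titles look like "postgres: user db host state".
        if title.startswith("postgres: %s " % CLIENT_DB_USER):
            os.setpriority(os.PRIO_PROCESS, int(pid), NICE_VALUE)
    except (FileNotFoundError, PermissionError, ProcessLookupError):
        pass  # the process exited, or it is not ours to renice
```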
It sounds like your problem isn't the particular query processes they're running but rather the other modifications they're making to the larger structure. There's only one way to cope with that: you have to treat the client like an intruder and use the approaches of that portion of the computer-security field to detect when they screw things up. Seriously! Install an intrusion detection system like Tripwire on the server (there are better tools; that's just the classic example) and have it alert you when they touch anything. A new file that's 0777? That should jump right out of a proper IDS report.
On the database side, you can't directly detect the database being modified in any useful way. What you can do is a pg_dump of the schema every day into a file (pg_dumpall -g and pg_dump -s), diff that against the last one you delivered, and again alert yourself when it changes. If you manage this well, contact with the client turns into "we noticed you changed something on the server... what is it you're trying to accomplish with that?", which makes you look like you're really paying attention to them. That can turn into a sales opportunity, and they may stop fiddling with things as much just knowing you're going to catch it immediately.
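A sketch of that daily schema watch, assuming passwordless access (e.g. via .pgpass) and a hypothetical database name; replace the print with whatever alerting you use:

```python
# Dump the schema, diff it against yesterday's snapshot, report any drift.
import difflib
import os
import subprocess

DBNAME = "clientdb"                                  # hypothetical
SNAPSHOT = "/var/lib/schemawatch/%s.sql" % DBNAME    # hypothetical

dump = subprocess.run(["pg_dump", "-s", DBNAME],
                      capture_output=True, text=True, check=True).stdout

previous = ""
if os.path.exists(SNAPSHOT):
    with open(SNAPSHOT) as f:
        previous = f.read()

diff = list(difflib.unified_diff(previous.splitlines(), dump.splitlines(),
                                 "previous", "current", lineterm=""))
if diff:
    print("\n".join(diff))   # hook your mail/alerting here instead

os.makedirs(os.path.dirname(SNAPSHOT), exist_ok=True)
with open(SNAPSHOT, "w") as f:
    f.write(dump)
```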
The other thing you should start doing immediately is to put as much of each client box as you can under version control. You should be able to log in to each system, run the appropriate status/diff tool for the install, and see what's changed (a sketch follows). Get that mailed to you regularly too. Again, this works best if combined with something that dumps the schema as a component of what it manages. Not enough people apply serious version-control approaches to the code that lives in the database.
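And a companion sketch for the version-control check, assuming the install lives in a git checkout at a hypothetical path; run it from cron and pipe the output to mail for the regular report:

```python
# Report anything changed outside your deployment process.
import subprocess

REPO = "/srv/clientapp"   # hypothetical checkout path

status = subprocess.run(["git", "-C", REPO, "status", "--porcelain"],
                        capture_output=True, text=True, check=True).stdout
if status:
    print("Unmanaged changes in %s:\n%s" % (REPO, status))
```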
That's the main set of technical approaches that are useful here. The rest of what you've got is a classic consulting client-management problem, far more a people problem than a computer one. Cheer up; it could be worse. FSM help you if you give them ODBC access and they discover they can write their own queries in Access or something similarly simple.

What code should I write for a dongle-attached system to provide better security?

I have developed a piece of software (in C and Python) that I want to protect with a dongle, so that copying and reverse engineering become hard enough. My dongle device comes with an API which provides the following:
Check for dongle existence
Check that it is the proper dongle
Write to a memory location on the dongle
Read from a memory location on the dongle, etc. (I think the rest aren't that useful...)
What can I do in the source code to make it harder to crack? The dongle provider suggested that I check for the proper dongle's existence in a loop or after an event, or that I use the dongle's memory in an efficient way. But how? I have no idea how crackers crack. Please shed some light on this. Thanks in advance.
P.S.: Please don't suggest obfuscation. I have already done that.
First of all, realize that the dongle will only ever be a small obstacle. Someone who knows what they're doing will just remove the call to the dongle and put in 'true' for whatever result it was supposed to return. Everyone will tell you this. But there are roadblocks you can add!
I would find a key portion of your code, something that's difficult to work out and that requires domain knowledge, and put that knowledge onto the key. One example would be shader routines. Shader routines are text files sent to a graphics card to achieve particular effects; a very simple brightness/contrast filter would take less than 500 characters to implement, and you can store that in the user space on most dongles. If you then use only the information from the key to display images, simply removing your dongle will black out every image in your program. A cracker would need a copy of your program, would have to grab the text file from the key, modify your program to include that text file, and know that this particular file is the 'right' way to display images. The particulars of the implementation depend on your deployment platform. If you're running a program in WPF, for instance, you might be able to store a DirectX routine on your key, load that routine from the key, and apply the effect to all the images in your app. The cracker then has to be able to intercept that DirectX routine and apply it properly.
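Here is a platform-neutral sketch of that idea. The point is that the image pipeline needs a lookup table that ships only in dongle memory, so there is no boolean check for a cracker to patch to 'true'; the Dongle class is a hypothetical stand-in for your vendor's API:

```python
# Hypothetical stand-in for the vendor API so the sketch runs; the real
# class would wrap the dongle SDK (e.g. via ctypes).
class Dongle:
    def __init__(self, memory=bytes(256)):
        self._mem = memory
    def read(self, offset, length):
        return self._mem[offset:offset + length]

def apply_filter(pixels, dongle):
    # The brightness lookup table lives only in dongle memory, never in
    # the binary, so the program has no copy of the data to fall back on.
    table = dongle.read(0, 256)
    return bytes(table[p] for p in pixels)

# With no key (all-zero memory here) every image comes out black; with
# the real key's table (identity curve here) images render normally.
print(apply_filter(b"\x10\x80\xff", Dongle()))                   # b'\x00\x00\x00'
print(apply_filter(b"\x10\x80\xff", Dongle(bytes(range(256)))))  # unchanged
```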
Another possibility is to use the key's random-number-generation routines to create UIDs. As soon as someone removes the dongle functionality, all the generated UIDs will be zeroed.
The best thing to do, though, is to put a domain-specific function onto the dongle itself (such as the entire UID-generation routine). Different manufacturers offer different capabilities in this regard.
How much of a roadblock will these clevernesses buy you? Realistically, it depends on the popularity of your program. The more popular your program is, the more likely someone will want to crack it and will devote time to doing so. In that scenario, you might buy a few days if you're particularly good at dongle coding. If your program is not that popular (only a few hundred customers, say), then the mere presence of a dongle could be deterrent enough, without your having to do anything clever.
Crackers will crack it by sniffing the traffic between your app and the dongle and then either disabling any code that tests for dongle presence or writing code to emulate the dongle (e.g. by replaying recorded traffic), whichever looks easier.
Obfuscating the testing code, scattering many pieces of code that perform the tests in different ways, and separating the effect of a test (disabling or degrading functionality, displaying a warning, etc.) from the test itself, both spatially and temporally, all make the former method harder.
Mutating the content of the dongle with each test, based on a random nonce created on each run (or possibly even preserved between runs), so that naively recording and replaying the traffic does not work, makes the latter method harder.
However, with the system as described, it is still straightforward to emulate the dongle, so sooner or later someone will do it.
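For illustration, here is a sketch of that nonce-based check, assuming a hypothetical dongle that can compute an HMAC over each challenge with a key held in its protected memory; the fresh nonce per check is what defeats naive record-and-replay:

```python
import hashlib
import hmac
import secrets

class HmacDongle:
    """Hypothetical stand-in: a real dongle would compute this on-device."""
    def __init__(self, key):
        self._key = key
    def respond(self, challenge):
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

def check_dongle(dongle, app_key):
    challenge = secrets.token_bytes(16)   # fresh nonce, never reused
    expected = hmac.new(app_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(dongle.respond(challenge), expected)

# The caveat above still applies: app_key has to live somewhere in the
# application, so a patient cracker can extract it and build an emulator.
key = b"shared-secret"                     # hypothetical provisioning
print(check_dongle(HmacDongle(key), key))  # True with the right dongle
```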
If you have the ability to execute code inside the dongle, you could move functions critical to your application into it, which would mean that crackers must either re-derive the code or break the dongle's physical security, a much more expensive proposition (though still feasible; realize that there is no such thing as perfect security).
How to maximize protection with a simple dongle?
Use the API together with an enveloper, if one exists for your resulting file format. This is a very basic rule, because our enveloper is already equipped with some anti-debugging and obfuscation methods that make common newbie hackers give up on cracking the program. Using only the enveloper is also not recommended, because once a hacker can break the enveloper's protection in another program, they can also break yours.
Call the dongle APIs in a LOT of places in your application: for example, at first startup, when opening a file, when a dialog box opens, and before processing any information. Also, maybe do some random checking even when nothing is being done at all.
Use more than one function to protect the program. Do not rely only on the find function to look for a plugged-in dongle.
Use multiple DLLs/libraries (if applicable) to call the dongle functions. If one DLL is hacked, there are still other parts of the software that use the functions from another DLL. For example, copy sdx.dll to print.dll, open.dll, and other names, then define the function calls from each DLL with different names.
If you use a DLL file to call dongle functions, bind it together with the executable. There are quite a few programs capable of doing this, for example PEBundle.
I found this article on PRLOG and found it quite useful for maximizing protection with a simple dongle. Maybe this link will help you:
Maximizing Protection with a Simple Dongle for your Software
You can implement many checkpoints in your application.
I don't know if you use HASP, but unfortunately, dongles can be emulated.
You may want to look into using Dinkey Dongles for your copy protection.
It seems like a very secure system, and the documentation gives you tips for improving your overall security when using it.
http://www.microcosm.co.uk/dongles.php
Ironically, the thing you want to discourage is not piracy by users but theft by vendors. The internet has become such a lawless place that vendors can steal and resell your software at will. You have legal recourse in some cases, but not in others.
Nothing is foolproof, as previously stated. Also, the more complex your security is, the more likely it is to cause headaches or problems for legitimate users.
I'd say the most secure application is always the one tied most closely to a server. Sadly, users then worry about it being spyware.
If you make a lot of different calls to your dongle, then maybe the cracker will just emulate your dongle, or find a single point of failure (it is quite common that changing one or two bytes renders all your calls useless). It is a no-win situation.
As the author of PECompact, I always tell customers that they cannot rely on anything to protect their software, as it can and will be cracked if a dedicated cracker goes after it. The harder you make it, the more of a challenge (and the more fun) it is for them.
I personally use very minimal protection techniques on my software, knowing these facts.
Use a smartcard, and encrypt/decrypt the working files through a secret function stored on the card. The software can then still be pirated, but the pirated copy will not be able to open properly encrypted working files.
I would say that if someone wants to crack your software protection, they will do so. When you say 'hard enough', how should 'enough' be interpreted?
A dongle will perhaps prevent your average user from copying your software, so in that sense it is already 'enough'. But anyone who feels the need to circumvent the dongle, and is able to, will likely be able to get past any other scheme you engineer as well.