What is needed to make my own SNMP agent and server? - sockets

Hi,
I want to make my own SNMP server and agent, with my own MIB and OIDs.
How can I do it, and where do I start?
Also, if I want to use the Windows SNMP service, extend it, and insert my own OIDs into its MIB, is that possible? If yes, how can I do this?

There is an excellent open-source implementation for the .NET framework called SharpSnmpLib. It can implement a normal SNMP server, and it allows you to load your own custom MIBs.
A couple of tips (a quick transport-level sketch follows them):
You can find existing MIBs at oidview or the Cisco MIB Browser
Avoid v3 and the RFCs that belong to it (in fact, I'd avoid the RFCs altogether; they're confusing and cover many areas that were not adopted)
Test early and often with machines as close to the production setup as you can
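Since the question is tagged "sockets", it may help to see what an agent is at the transport level: a process bound to UDP port 161 that parses BER-encoded SNMP PDUs and answers them. The sketch below (Python, purely as an illustration; SharpSnmpLib, or pysnmp on the Python side, does the BER/MIB work for you) only receives the raw datagrams and is a starting point, not a working agent:

    # Minimal sketch of the UDP transport an SNMP agent sits on.
    # This does NOT decode SNMP: a real agent must parse the BER-encoded
    # ASN.1 PDU in each datagram and send back an encoded GetResponse.
    import socket

    LISTEN_ADDR = ("0.0.0.0", 161)   # binding to port 161 usually needs admin/root rights

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(LISTEN_ADDR)
    print("pseudo-agent listening on %s:%d" % LISTEN_ADDR)

    while True:
        datagram, client = sock.recvfrom(4096)   # one SNMP message per datagram
        print("received %d bytes from %s" % (len(datagram), client))
        # TODO: BER-decode the PDU, look up the requested OID in your MIB,
        # and sock.sendto() an encoded response back to `client`.

Everything marked TODO is exactly what a library like SharpSnmpLib replaces, which is why starting from a library rather than raw sockets is usually the right call.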

If you ever start implementing any standardized protocol, the first step is to read the standards defining it. In the case of SNMPv3, the relevant standards are RFCs 3411, 3412, 3413, 3414, 3415, 3416, 3417, and 3418.
The good (and bad) thing about RFCs is that they usually state very clearly what you MUST, SHOULD, MUST NOT, SHOULD NOT, and MAY do in your implementation.


Which exploit and which payload to use?

Hi everyone and sorry for my bad English.
I'm learning penetration testing.
After reconnaissance and scanning of my target, I have enough information to move on to the next phase.
Some of the info I have: open ports with their running services, names of the services, service versions, the operating system of the device, firewalls used, etc.
I launched msfconsole.
I should find the correct exploit and payload, based on the information collected, to gain access. I've read the Metasploit Unleashed guide on offensive-security. I've learned the Metasploit fundamentals and the use of msfconsole.
But I don't understand how to start all of this. Assuming that my target has 20 ports open, I want to test for vulnerabilities using an exploit and payload that do not require user interaction. That narrows down the possible exploits and payloads, but there are still too many. Searching for and testing every exploit and payload for each port isn't practical! So, if I don't know the target's vulnerabilities, how do I proceed?
I would like to understand what I'm doing, and not just try things without understanding them.
Couple of things:
We have a stack exchange for security! Check it out at https://security.stackexchange.com/
For an answer: you want to look for "remote exploits", as those do not require user interaction. You can find a curated list of exploits here: https://www.exploit-db.com/remote/
You can search the services on this page for something that matches the same service/version as your attack vector.
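If you have a local copy of Exploit-DB (e.g. on Kali), the same search can be scripted. A hedged sketch in Python - it simply shells out to the searchsploit tool, and the service name and version are hypothetical examples of what your scan might have reported:

    # Hedged sketch: look up a scanned service/version in a local Exploit-DB
    # copy via searchsploit (assumes Kali or an installed exploitdb package).
    import subprocess

    def lookup(service, version):
        """Print Exploit-DB entries matching the given service and version."""
        result = subprocess.run(
            ["searchsploit", service, version],
            capture_output=True, text=True, check=False,
        )
        print(result.stdout)

    # Example: a banner found during scanning (hypothetical target data).
    lookup("vsftpd", "2.3.4")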

How do we develop an application on the mainframe to access DB2/LUW without DB2/z?

We have developed an application which runs on the mainframe (z/OS), and it uses CAF, the Call Attach Facility, to talk to DB2/z for storing its data.
Those customers which already have DB2/z (and hence have to pay for it regardless) are not concerned, but there are others who want to use our application without incurring the expense of the database as well.
They have expressed a desire to have our product not use DB2/z, due to the expense. Under z/OS, the licence fees for DB2 are rather high and our application doesn't really need the insane levels of reliability that it provides.
So what they'd like us to do is to run DB2 under either zLinux (SLES/RHEL), or DB2/LUW on a machine totally separate from the mainframe. Or even, though this will probably be harder, to use a non-IBM database.
We're looking for a solution that requires hopefully minimal changes to our code. DB2 has all its federated stuff, which will allow a program using DB2/z to seamlessly access data on an instance running elsewhere, but this still requires DB2/z and hence won't result in a cost reduction.
What would be the easiest way to shift all the data off the mainframe and allow us to remove the DB2/z dependency completely from our application?
Building on @NealB's answer, another way to create the layers would be to have no SQL in your application layer, but to call subroutines to accomplish your I/O. You indicate you would be willing to create custom builds, so you could create a set of routines for commonly-asked-for persistence layers.
Call the "database connect" module, which for DB2 on z/OS would do the CAF calls, and for DB2 on zLinux would (say) establish an SSL connection to the DBMS. Maintain a structure in memory with a union of pointers to the necessary data structures to communicate with your DBMS of choice.
FWIW I've seen vendor code that does this, allowing the business logic to be independent of the DBMS implementation. Some shops use VSAM, others DB2, others IMS. The data model is messy, but sometimes them's the breaks.
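To make "no SQL in the application layer, call subroutines for your I/O" concrete, here is a hedged sketch of the pattern. It is shown in Python only because that keeps it short - on z/OS these would be statically or dynamically called subroutines - and the backend, table, and method names are illustrative, not your actual data model:

    # Sketch of a persistence layer the application calls instead of issuing SQL.
    # Each backend implements the same small interface; the build (or a config
    # switch) decides whether the DB2-on-z/OS, DB2-on-zLinux, or other backend
    # gets linked in.
    from abc import ABC, abstractmethod
    import sqlite3

    class PersistenceLayer(ABC):
        @abstractmethod
        def connect(self): ...
        @abstractmethod
        def save_record(self, key, payload): ...
        @abstractmethod
        def fetch_record(self, key): ...

    class SqliteBackend(PersistenceLayer):
        """Stand-in backend; a DB2 CAF or network backend would expose the
        same three calls, so the application layer never changes."""
        def __init__(self, path=":memory:"):
            self.path, self.conn = path, None

        def connect(self):
            self.conn = sqlite3.connect(self.path)
            self.conn.execute("CREATE TABLE IF NOT EXISTS t (k TEXT PRIMARY KEY, v TEXT)")

        def save_record(self, key, payload):
            self.conn.execute("INSERT OR REPLACE INTO t VALUES (?, ?)", (key, payload))
            self.conn.commit()

        def fetch_record(self, key):
            row = self.conn.execute("SELECT v FROM t WHERE k = ?", (key,)).fetchone()
            return row[0] if row else None

    # Application layer: no SQL, only interface calls.
    db = SqliteBackend()
    db.connect()
    db.save_record("cust0001", "some payload")
    print(db.fetch_record("cust0001"))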
This isn't an answer, just a couple of ideas and observations.
One approach I can think of would be to tier your application into an I/O layer and an application layer. The application would run on z/OS and the I/O layer would run on whatever machine hosts the database. All data access would then be via remote procedure calls over TCP/IP or UDP. This would be a lot of work to set up and configure. Worse yet, it may only be appropriate for read-only operations, because managing transaction ACID (Atomicity, Consistency, Isolation, Durability) properties becomes a real nightmare in the face of update operations.
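As a hedged sketch of that tiering (Python and the standard-library XML-RPC modules, purely for illustration; host names and procedure names are made up), the I/O layer runs next to the database and exposes a few remote procedures, and the application calls them over TCP/IP:

    # I/O layer: runs on whatever machine hosts the database and exposes
    # data-access procedures over TCP/IP (XML-RPC here just for illustration).
    from xmlrpc.server import SimpleXMLRPCServer

    STORE = {}  # stand-in for the real DBMS

    def save_record(key, payload):
        STORE[key] = payload
        return True

    def fetch_record(key):
        return STORE.get(key, "")

    server = SimpleXMLRPCServer(("0.0.0.0", 8000))
    server.register_function(save_record)
    server.register_function(fetch_record)
    server.serve_forever()

The application layer then holds no SQL and no local database, just remote calls:

    # Application layer (would be the z/OS side): remote calls to the I/O layer.
    from xmlrpc.client import ServerProxy

    io_layer = ServerProxy("http://db-host.example.com:8000")  # hypothetical host
    io_layer.save_record("cust0001", "some payload")
    print(io_layer.fetch_record("cust0001"))

As noted above, this only sidesteps, rather than solves, the transaction-management problem.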
As cschneid pointed out, you could try "rolling your own" database management system using open source; but that too would probably lead to more problems than it solves.
I think your observation about "pushing a big rock uphill" sums it up.

Using CouchDB as an interface. Is it an appropriate approach?

Our devices (microscopes with cameras) produce images plus additional information for each image.
Now a middleware supplier wants to connect these devices to a lab automation system. They have to acquire the data and we have to provide it. An astonishing thing for me was their interface suggestion - a very cryptic token-separated format (ASTM E1394-97). Unfortunately, they can't even accommodate images in their protocol, and are aiming to get file paths instead.
This doesn't seem like an up-to-date approach to me. While looking for alternatives, I came across CouchDB.
So my idea was: our devices would import the data, including the images, into CouchDB, and the middleware could get the data from there. It even seems that, using Mustache, we could produce the format they want (ASCII text), placing URLs as image references instead of paths.
My question is: has anyone already applied CouchDB to such a use case? It seems to be a slight misuse of CouchDB, as the main intention here is an interface, not data storage. Another point that concerns me is that the inventor of CouchDB moved on to another project, Couchbase. Could that mean a lack of support for CouchDB in the future?
Thank you very much for any insights and suggestions!
It's an acceptable use case, and we're actually using CouchDB in just this way - as proxying middleware between medical laboratory analyzers and a LIS. Some of the analyzers publish images or PDF data to shared folders, and we just load those into the related document as attachments.
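For instance, pushing an image into a document is a single HTTP PUT against CouchDB's attachment API. A minimal sketch in Python with the requests package - the CouchDB URL, database name, document ID and file name are assumptions for illustration, and authentication is omitted:

    # Hedged sketch: create a result document and attach a camera image to it
    # over CouchDB's plain HTTP API (server URL, db and ids are hypothetical).
    # Assumes the database already exists (create it once with PUT /microscope_results).
    import requests

    COUCH = "http://localhost:5984"
    DB = "microscope_results"

    # 1. Create the document holding the per-image metadata.
    doc = {"device": "scope-01", "sample": "S-123", "captured": "2012-05-01T10:00:00Z"}
    resp = requests.put(f"{COUCH}/{DB}/result-0001", json=doc)
    rev = resp.json()["rev"]

    # 2. Attach the image; the document's current revision must be supplied.
    with open("image-0001.png", "rb") as fh:
        requests.put(
            f"{COUCH}/{DB}/result-0001/image-0001.png",
            params={"rev": rev},
            data=fh,
            headers={"Content-Type": "image/png"},
        )

    # The attachment is now reachable at a stable URL the middleware can reference:
    #   http://localhost:5984/microscope_results/result-0001/image-0001.png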
Moreover, you may like to know that CouchDB is able to run external processes (aka os_daemons) and take care of their lifespan: restarting them if they terminate, and starting them right after you update the config options through the HTTP interface. This helps in setting up ASTM client and server processes, since that protocol is different from HTTP (which is native to CouchDB); these processes communicate with the devices and create documents as regular CouchDB clients. In the same way you can set up daemons to monitor shared folders for specific files. And all this is just CouchDB with a few loosely coupled plugins.

Hosting needs for a turn-based iPhone game

So I've been spending some time developing an iPhone app - it's a simple little game and is similar to "Words with Friends" in that it:
1) is turn based
2) contacts a web service API to store the "game data" (turns, user info, etc).
In my case, I'm using .NET MVC and a SQL Server backend to develop the API. We're not talking about an immense amount of data here - small images will be transferred back and forth and stored in the database, though. A typical request would see a few records added or changed in the database.
I mostly don't have much concept of when things would start to get overloaded - my concern, of course, is that this thing takes off (obviously wishful thinking) and then my server gets so overwhelmed that it dies. That being said, I don't want to spend time and money on Windows Azure or something when my hosting needs may be totally trivial.
So, my somewhat general question is this - does anyone have any firsthand knowledge of when things start to get overloaded? Like...just a general estimate of number of requests or something for a time period, assuming each request hits the .NET app which then hits the database a reasonable number of times.
Even some anecdotal "My similar API gets hit 10,000 times a minute and is hosted on crappy shared hosting" would be awesome just so I get some concept.
Thanks in advance!
It is very hard to give a good answer to your question, as it greatly depends on what precisely the backend does for each request. Even "trivial" services such as the one you describe can easily differ greatly in performance depending on the actual implementation.
As a rough guideline based on our projects, if your API is a single HTTP request (no HTTPS), hitting a bare-bones controller, being translated into a single, simple SQL statement ("SELECT * FROM foo WHERE bar") returning less than 100 bytes of data, you can serve about 750 requests per minute on a 32-bit, 1 GHz box with 512 MB RAM.
But this number will be reduced to 75 or less if any of those factors go up.
That said:
This is the poster-child case for cloud computing.
If Azure is too much hassle / cost for you (which is not an uncommon complaint from independent developers) you have three main alternatives:
1) Ditch .NET in favor of Python and host within Google App Engine
Python is quick to learn and GAE scales beautifully without you ever needing to care. Best of all, there is a huge free tier, so unless your app really takes off, you won't pay a cent. As you are developing for iOS, I assume you aren't hell-bent on .NET to begin with.
2) If you need .NET, go with AWS
They also have a rather large free tier. Either throw everything on top of a Mono stack (completely free for the 1st year) or shell out the money for a Windows EC2 instance. This takes more planning than GAE, but with a little work you can make it scale to wherever your app goes.
If cost is a concern, use the same AWS cluster to host several of your Apps' APIs.
3) Go with OpenFeint's Multiplayer API
OpenFeint supports basic multiplayer games. If you can implement the needed functionality using it, then this might be the best solution. If not, look into (1) and (2).
How long is a piece of string? It all depends on the hosting and connection speeds. .NET is more than capable of handling large numbers of requests. The simplest solution is to monitor the server (or, if you cannot, monitor your web service's performance) and get better hosting if your app starts to suffer.
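If you want a rough number for your own stack rather than anecdotes, measure it: fire concurrent requests at a staging copy of your API and watch latency and error rate as you raise the load. A hedged sketch in Python with the requests package - the URL is a placeholder for your own turn-submission endpoint, and this is a crude probe, not a real load-testing tool:

    # Crude load probe against a staging copy of the API.
    # Raise WORKERS/REQUESTS until latency or errors climb; that is roughly
    # where your current hosting tops out.
    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests

    URL = "https://staging.example.com/api/turns"   # hypothetical endpoint
    WORKERS, REQUESTS = 20, 200

    def one_call(_):
        start = time.time()
        resp = requests.get(URL, timeout=10)
        return time.time() - start, resp.status_code

    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        results = list(pool.map(one_call, range(REQUESTS)))

    latencies = sorted(t for t, _ in results)
    errors = sum(1 for _, code in results if code >= 500)
    print("median %.0f ms, p95 %.0f ms, %d server errors"
          % (latencies[len(latencies) // 2] * 1000,
             latencies[int(len(latencies) * 0.95)] * 1000,
             errors))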

How can I communicate across Perl CGI scripts?

I am searching for efficient ways of communicating across two Perl scripts. I have two scripts; Script 1 generates some data. I want Script 2 to be able to access that information.
The easiest/dumbest way is to write the data generated by Script 1 to a file and read it later using Script 2. Is there any other way? Can I store the data in memory and make it available to Script 2 (of course with support from my Linux system)? Meaning, malloc some data in Script 1 and make Script 2 able to access it.
There is no guarantee that Script 2 will be run after Script 1, so there should be some way to free that memory, perhaps using a watchdog timer.
Let me reveal some more context. I am running these scripts on a web server using Perl CGI. So at the click of a button Script 1 is run and it generates an HTML web page. Now the user can add some input to this generated web page and click a button on this new page. Now Script 2 should be able to read the data on the new web page. I could post the data back to the web server again, but a more efficient way would be to keep a copy of the generated page on the server as well and make it available to Script 2. However, I would like to avoid writing the generated page out as a file; I was thinking of storing it in memory.
This depends somewhat on your usage... One large set of data? Many small messages? Do you care at all about data persistence? Is it TOTALLY asynchronous?
Some of the options are:
For any but the most high-performance web sites, the best approach is to write out the HTML pages to files! Unless the inter-process communication is benchmarked to be the bottleneck in performance, don't bother with any of the non-file solutions (shared memory, cache, intermediate server).
Specifically for two CGI scripts on the same server: if you run them under mod_perl or some other arrangement which shares a Perl interpreter between the two CGI processes, you can develop a package to serve as a cache, which - with its package-level variables - would be preserved in memory by mod_perl as long as mod_perl is running, and can thus be used by a writer CGI process and a reader CGI process to communicate. Of course the usual synchronization/deadlock and persistence issues associated with readers/writers need to be considered.
As an alternative, use Apache::Session sessions to store inter-session data.
As you noted, shared memory. For example use IPC::ShareLite, IPC::Cache, or this solution from perlmonks.
Also, please check Chapter 16 Recipe 12 "Sharing Variables in Different Processes" from O'Reilly's "Perl Cookbook" (no link since non-pirated versions aren't online anywhere I know of)
Use a permanent medium. A file is one option. A database is another.
For async, use an intermediate messaging system (MQ, Tibco, or something more lightweight). Probably a bit of overkill in this scenario, but a valid option to be aware of. This one is likely to be pretty stable, solid, and optimized, but possibly not free and less flexible/tailored.
Or roll your own simple messaging-system server - it's not THAT complicated for the very simple one you seem to need (a minimal sketch follows this list).
Listen on one port for requests from the first process to store data, listen on another port for requests from the consumer process to send it that data, store the data in a storage area in memory, and purge it when it expires using alarms or a separate watcher child process.
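A hedged sketch of such a server - in Python for brevity, but the same structure works in Perl with IO::Socket; the port, keys and TTL are arbitrary illustration values:

    # Tiny in-memory store both CGI scripts can talk to over TCP.
    # One process "put"s a value, another "get"s it later; entries expire
    # after TTL seconds, which plays the role of the watchdog timer.
    import json
    import socketserver
    import time

    TTL = 300          # seconds before unread data is discarded
    STORE = {}         # key -> (expiry_timestamp, value)

    class Handler(socketserver.StreamRequestHandler):
        def handle(self):
            request = json.loads(self.rfile.readline())
            now = time.time()
            # purge expired entries on every request (poor man's watchdog)
            for k in [k for k, (exp, _) in STORE.items() if exp < now]:
                del STORE[k]
            if request["op"] == "put":
                STORE[request["key"]] = (now + TTL, request["value"])
                reply = {"ok": True}
            else:  # "get"
                _, value = STORE.get(request["key"], (0, None))
                reply = {"ok": value is not None, "value": value}
            self.wfile.write((json.dumps(reply) + "\n").encode())

    if __name__ == "__main__":
        with socketserver.ThreadingTCPServer(("127.0.0.1", 9999), Handler) as srv:
            srv.serve_forever()

A client is then just a socket that writes one JSON line such as {"op": "put", "key": "page-42", "value": "<html>...</html>"} and reads one line back.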
You've tagged your question as "cgi". Are they both CGI programs? In that case, they can just talk to each other by making HTTP requests.
However, you'll have to tell a lot more about why you are trying to do this and what you need to accomplish for us to help you. It's certainly easy for Perl programs to communicate with each other in some fashion, but that doesn't mean it's the right answer for you.
When you have complex requirements for interaction among CGI programs, you probably want to move to a web framework that handles a lot of those details for you. Catalyst might be where you'd want to start. There's even a book for it.