How can I create RRD files in Perl?

I have a separate application printing logs every 10 seconds. I need to create RRD files from the log files. I need some Perl code to read the log files and create the RRDs only, without the graphs.
I have also gone through the available Perl modules on CPAN, i.e. RRD::Simple and RRD::Simple::Examples, but I still need help.

I'd start with RRD::Simple. There's some example code in the documentation. Since you don't need to create a graph, simply skip that section of the example.
Some of the examples read a single sample of data, call the update function once, and then exit. Those scripts are meant to be run periodically to collect data in real time. The example that's probably more pertinent to your needs is ApacheAccessLogActivity.pl, which reads an Apache log file, parses each line with a regular expression, does a bit of analysis to figure out what it just read, and then calls update, all in a loop. Note that that example uses the standalone functions rather than the object-oriented versions.
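Here's a minimal sketch in that spirit, using the object-oriented interface (the example script itself uses the functional one). The "epoch value" log format, the file names, and the single GAUGE source called metric are assumptions to adapt to your data:
#!/usr/bin/perl
use strict;
use warnings;
use RRD::Simple;

# Assumption: log lines look like "<epoch> <value>".
my $rrdfile = "mydata.rrd";
my $rrd     = RRD::Simple->new( file => $rrdfile );

# Create the RRD with one GAUGE source if it doesn't exist yet.
$rrd->create( metric => "GAUGE" ) unless -f $rrdfile;

open my $log, '<', 'application.log' or die "Cannot open log: $!";
while ( my $line = <$log> ) {
    next unless $line =~ /^(\d+)\s+([\d.]+)/;

    # One sample per log line; no graph() call needed anywhere.
    $rrd->update( $rrdfile, $1, metric => $2 );
}
close $log;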
If you've already read the documentation for that module and need more information about how to use it, or if you've tried it and found that it has shortcomings that prevent you from using it, then please be more specific about what you need to do.
RRDTool::OO also looks promising.

I'd recommend RRDTool::OO.
Excerpt from the perldoc:
$rrd->create( ... )
Creates a new round robin database (RRD). An RRD consists of one or more data sources and one or more archives:
$rrd->create(
    step        => 60,
    data_source => { name => "mydatasource",
                     type => "GAUGE" },
    archive     => { rows => 5 });
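Feeding it data afterwards is just as terse. A sketch from the same module's documented interface (the file name and sample value here are placeholders):
use RRDTool::OO;

my $rrd = RRDTool::OO->new( file => "mydata.rrd" );

$rrd->create(
    step        => 60,
    data_source => { name => "mydatasource",
                     type => "GAUGE" },
    archive     => { rows => 5 });

# Update with an explicit timestamp; time defaults to now if omitted.
$rrd->update( time => time(), value => 42 );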

Related

Discord.py mod log setup

I was wondering how one could make a command that sets up a mod log for a certain server, or starts logging in one that already exists. This will probably require a database or JSON; anything is fine, as long as I can get it to work. Any help would be appreciated.
The way I would do this is with a .txt file that would contain all of the logs. I would add a line every time an event occurred within the guild; you could configure this for whatever events you wanted.
import datetime

@client.event
async def on_member_join(member):
    with open("logs.txt", "a") as logsFile:
        logsFile.write("\n[{}] {} just joined the server".format(
            datetime.datetime.now(), member.name))
If you are currently using the logging module for discord.py or any other libraries, you can also use it for logging your own messages, either using the same or a separate logger.
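For instance, a separate logger writing to its own file might look like this (a sketch assuming the same client object as above; the logger name and file name are illustrative):
import logging

# A dedicated logger so guild events don't mix with discord.py's own
# "discord" logger output.
modlog = logging.getLogger("modlog")
modlog.setLevel(logging.INFO)

handler = logging.FileHandler(filename="modlog.txt", encoding="utf-8", mode="a")
handler.setFormatter(logging.Formatter("[%(asctime)s] %(message)s"))
modlog.addHandler(handler)

@client.event
async def on_member_join(member):
    modlog.info("%s just joined the server", member.name)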

minifilter driver | tracking changes in files

What I'm trying to achieve is to intercept every write to a file and track the changes within the file. I want to track how different the file content is before and after the write.
So far in my minifilter driver I have registered for IRP_MJ_WRITE callbacks and can now intercept writes to files. However, I'm still not sure how I can obtain the content of the file before [preoperation] and after [postoperation] the write.
The parameters that I have within the callback functions are:
PCFLT_RELATED_OBJECTS, PFLT_CALLBACK_DATA and I could not find anything related to the content of the file itself within these.
These are the operations that could change data in a file:
Modifying the file: IRP_MJ_WRITE, IRP_MJ_SET_INFORMATION (specifically the FileEndOfFileInformation and FileValidDataLengthInformation information classes), IRP_MJ_FILE_SYSTEM_CONTROL (specifically the FSCTL_OFFLOAD_WRITE, FSCTL_WRITE_RAW_ENCRYPTED and FSCTL_SET_ZERO_DATA fsctl codes).
As for the content of the file itself, you just need to read it yourself.
If you mean the buffers as they are being written, for example, check this out to find out more about the parameters of IRP_MJ_WRITE in the callback data. Essentially the buffer is at Data->Iopb->Parameters.Write.WriteBuffer/MdlAddress.
Make sure you handle that memory correctly, otherwise it will result in a BSOD.
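A rough pre-write sketch along the lines of the WDK minifilter samples (filter registration, contexts and most error handling omitted; treat this as a starting point, not a complete filter):
#include <fltKernel.h>

FLT_PREOP_CALLBACK_STATUS
PreWriteCallback(
    _Inout_ PFLT_CALLBACK_DATA Data,
    _In_ PCFLT_RELATED_OBJECTS FltObjects,
    _Flt_CompletionContext_Outptr_ PVOID *CompletionContext
)
{
    UNREFERENCED_PARAMETER(FltObjects);
    UNREFERENCED_PARAMETER(CompletionContext);

    PVOID buffer = Data->Iopb->Parameters.Write.WriteBuffer;
    ULONG length = Data->Iopb->Parameters.Write.Length;
    PMDL  mdl    = Data->Iopb->Parameters.Write.MdlAddress;

    // If an MDL is supplied, map it rather than touching WriteBuffer directly.
    if (mdl != NULL) {
        buffer = MmGetSystemAddressForMdlSafe(mdl, NormalPagePriority);
        if (buffer == NULL) {
            Data->IoStatus.Status = STATUS_INSUFFICIENT_RESOURCES;
            Data->IoStatus.Information = 0;
            return FLT_PREOP_COMPLETE;
        }
    }

    // User-mode buffers can vanish at any moment: guard every access.
    __try {
        // ... inspect up to 'length' bytes at 'buffer' here ...
    } __except (EXCEPTION_EXECUTE_HANDLER) {
        // Swallow the fault instead of letting it bugcheck the machine.
    }

    return FLT_PREOP_SUCCESS_WITH_CALLBACK;
}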
Good luck.

Issue with a topojson object in a Meteor app built with CoffeeScript

Apologies for the lack of precision in the question, but I'm not completely sure which of possibly many things I'm doing wrong here.
I'm relatively new to CoffeeScript and to geo applications in general, but here goes:
I've got a working (simple) Meteor (0.7.0.1) application utilizing CoffeeScript on both client and server. The issue I'm having occurs when attempting to utilize TopoJSON-encoded files to create a layer of US congressional districts. (The purpose of the app is to help highlight voter suppression in the US.)
So, a few things: typically in a non-Meteor app, I would just load the TopoJSON file like so:
$.getJSON('./data/us-congress-113.json', function (data) {
    var congress_geojson = topojson.feature(data, data.objects.districts);
    congress_layer.addData(congress_geojson);
});
Now of course this won't work in Meteor because it's not asynchronous.
One of the things that was recommended here on SO was to not worry about reading the file, and to instead change the .json file to .js and then set the contents (which are of course just an object) equal to a variable.
Here's what I did:
First, I changed the .json file to a .js file in the server directory, and added the "congress =" to the beginning of the file. It's a huge file so forgive me for omitting the whole object.
congress = {"type":"Topology",
"objects":
{"districts":
{"type":"GeometryCollection","geometries":[{"type":"Polygon"
Now here's where everything starts to give me issues:
In the server.coffee, I've created a variable like so to reference the congress object:
@congress_geojson = topojson.feature(congress, congress.objects.districts)
Notice how I'm putting the @ symbol there? I've been told this allows a variable in CoffeeScript to be globally scoped? I tried to also use a Meteor feature called "share" where I declare the variable as "share.congress_geojson". That led to the same issues, which I will describe below.
Now in the client.coffee file, I'm trying to call this variable to load into a Leaflet map.
congress_layer = L.geoJson(null,
  style:
    color: "#DE0404"
    weight: 2
    opacity: 0.4
    fillOpacity: 0.1
)
congress_layer.addData(@congress_geojson)
This isn't working, and specifically (despite attempts to find other ways), the errors I'm getting in the console are:
Exception from Deps afterFlush function: TypeError: Cannot read property 'features' of undefined
at o.GeoJSON.o.FeatureGroup.extend.addData (http://localhost:3000/packages/leaflet.js?ad7b569067d1f68c7403ea1c89a172b4cfd68d85:39:16471)
at Object.Template.map.rendered (http://localhost:3000/client/client.coffee.js?37b1cdc5945f3407f2726a5719e1459f44d1db2d:213:18)
I have no doubt that I'm missing something stupidly obvious here. Any suggestions or tips for what I'm doing completely wrong would be appreciated. Is it a case where an object globally declared in a .js file isn't available to code in a .coffee file? Maybe I'm doing something wrong on the Meteor side?
Thanks!
Edit:
So I was able to get things working by putting the .js file containing the congress object in a root /lib folder, causing the object to load first, and then calling the congress object from the client. However, I still want to know how I could simply share this object from the server. What is the "Meteor way" here?
If you are looking for the Meteor way to order the loading of files, use the Meteor.startup function and put the initialization code there. That function is the $.ready of the Meteor world, i.e., it will execute only after all your files have been successfully loaded on the client.
So in your case:
Meteor.startup ->
  congress_layer.addData(@congress_geojson)
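As for sharing the object from the server, one hedged sketch of the "Meteor way" would be a method the client calls on startup (the method name and variable names here are illustrative, not from the original app):
if Meteor.isServer
  Meteor.methods
    getCongressTopology: -> congress   # the object loaded from the .js file

if Meteor.isClient
  Meteor.startup ->
    Meteor.call 'getCongressTopology', (err, data) ->
      return console.error err if err
      congress_geojson = topojson.feature(data, data.objects.districts)
      congress_layer.addData(congress_geojson)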

Open .mat file in MATLAB or Unix - new user

I am going through someone's data analysis files (created in an older version of MATLAB) and trying to find out what a particular .mat file that was used in a MATLAB script is.
I am trying to load a .mat file in MATLAB. I want to see what is in it.
When I type...
load('file.mat')
the file loads and I see two variables appear in the workspace: jobhelp and jobs.
When I try to open jobs by typing the following in the MATLAB command window...
jobs
the response is...
jobs =
[1x1 struct]
Does this mean that there is only a 1 x 1 structure in the .mat file? If so, how in the world do I see what it is? I'm even happy to load it in Unix, but I don't know how to do that either. Any help would be greatly appreciated, as I have a few files like this that I can't get any information from.
Again, I'm a new user, so please make it simple.
Thanks
It means that jobs is a cell array {} and within this cell array a structure is defined.
To see the structure and its contents, type jobs{1}.
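More generally, a few commands for poking at an unknown .mat file (a sketch; the file and variable names match the ones above):
whos('-file', 'file.mat')   % list variables, sizes and classes without loading
S = load('file.mat');       % load into a struct instead of the workspace

class(S.jobs)               % e.g. 'cell' or 'struct'
fieldnames(S.jobs{1})       % field names, if jobs is a cell holding a struct
disp(S.jobs{1})             % display the first level of its contents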
I think you are dealing with an SPM5 batch file. This variable is an image of the tree-like structure you can see in the Batch Editor of SPM. Your job consists of one subitem (stats), which could have various subsubitems (like fMRI model specification, model estimation and so on).
To access this structure on the command line, just proceed like Nick said:
Each level is a separate cell array that you can access with {#} after the name of the level. Example: jobs{1} shows you that there is a subitem named stats.
Subitems in structs are accessed with a dot. Example: jobs{1}.stats{1} shows you the subitems of the stats entry.
Notice that there could be more than one entry on each layer: a stats module could (and probably would) contain various subitems. You can access them via jobs{1}.stats{2}, jobs{1}.stats{3} and so on.
The last layer would be the interesting one for you: the structures in here are an image of the options you can choose in the batch editor.

What is the best file parsing solution for converting files?

I am looking for the best solution for custom file parsing for our enterprise import routines. I want to basically change one file format into a standard file format and have one routine that imports that data into the database. I need to be able to create custom scripts for each client, since it's difficult to get the customer to comply with a standard or template format. I have looked at PowerShell and IronPython to do this so far, but I am not sure this is the route I want to go. I have also looked at some tools such as Talend, a drag-and-drop-style tool, which may or may not give me what I want as far as flexibility. We are a .NET shop and have created custom code to do this in the past, but I need something that is quicker to create than coding custom parsing functions each time we get a new file format in.
Depending on the complexity and variability of your work, you should consider an ETL tool like SSIS (SQL Server Integration Services).
Python is wonderful for this kind of thing. That's why we use it. Each new customer transfer is a new adventure, and Python gives us the flexibility to respond quickly.
Edit. All Python scripts that read files are "custom file parsers". Without an actual file format, it's not sensible to provide a detailed example.
with open("some file", "r") as source:
    for line in source:
        process(line)
That's about all there is to a "custom file parser". If you're parsing .csv or .xml files, then Python has modules for that. If you're parsing fixed-format files, you'd use string slicing operations. If you're parsing other files (X12? JSON? YAML?) you'll need appropriate parsers.
Tab-Delim.
from collections import namedtuple

RecordLayout = namedtuple('RecordLayout', ['field1', 'field2', 'field3', ...])

def process(aLine):
    record = RecordLayout(*aLine.split('\t'))
    ...
Fixed Layout.
from collections import namedtuple

RecordLayout = namedtuple('RecordLayout', ['field1', 'field2', 'field3', ...])

def process(aLine):
    fields = (aLine[:10], aLine[10:20], aLine[20:30], ...)
    record = RecordLayout(*fields)
    ...
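In the same spirit, a .csv variant using the standard library's csv module (field names are placeholders, as above):
import csv
from collections import namedtuple

RecordLayout = namedtuple('RecordLayout', ['field1', 'field2', 'field3'])

def process_file(path):
    with open(path, newline='') as source:
        for row in csv.reader(source):
            record = RecordLayout(*row)
            ...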