I am studying character device driver programming. I have some doubts and hope to clarify them here:
(a) "A device file is associated with a major number and a minor number. Also in our driver module, we define a cdev object with its fops field defined according to our functions and same major and minor numbers as our device file."
1. I want to know what exactly happens when a function is called on the device file. Here is what I think. Suppose I make a file called mydevfile using mknod(). Now when I call open(mydevfile, O_RDWR), the kernel searches for a cdev object with the same major and minor numbers. When found, the cdev's fops is searched for the function registered for open() (say dev_open()). It is written that dev_open() should have a first argument of type inode* and a second argument of type file*. My question is: how are these parameters passed to the dev_open() function?
2. I learnt that an inode is associated with a file on disk. Which file is it associated with here? Also, the inode has a pointer to the corresponding cdev. Now, if we have already got the cdev by searching for the major and minor numbers from mydevfile, why do we need the inode?
3. What does the file* (i.e. the second argument) point to in this case?
You are free to explain this in your preferred way, but I would prefer if you could explain it using an example. Thanks!
I am a newcomer to character drivers. This is just a small summary of what I can make of your questions. Suggestions and edits are welcome.
These are the main structures you need to know when writing character drivers:
1) File operations structure: each field in this structure points to the function in the driver that implements the corresponding operation, e.g. open, read, write, ioctl. Each open file is associated with these functions via a field called f_op, which points to the file operations structure.
2) File structure: it represents an open file. It is not specific to drivers; each open file has a file structure in kernel space. It is created by the kernel on open and passed to any function that operates on the file, until the last close. The relevant field here is:
struct file_operations *f_op;
3) Inode structure: used by the kernel to represent files internally. Only two fields are important here:
a. struct cdev *i_cdev: the kernel's internal structure representing char devices.
b. dev_t i_rdev: contains the actual device numbers.
This is how I interpret it: the inode is read from disk and the inode object is initialized.
ext2_read_inode() ----> init_special_inode() ----> this initializes the i_rdev field of the inode object to the major and minor numbers of the device file.
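To tie these pieces together, here is a minimal, untested sketch of a char driver (the major/minor numbers and names are made up, and error handling is omitted). When you call open() on the device file, the VFS uses the device number stored in the inode to find the registered cdev, then calls the .open function from its fops, passing pointers to that inode and to the newly created file object:

#include <linux/module.h>
#include <linux/fs.h>
#include <linux/cdev.h>

static struct cdev my_cdev;

/* Called by the VFS when open() is invoked on the device file: the kernel
 * has already looked up my_cdev from the device number in the inode, and
 * it passes pointers to that inode and to the new file object. */
static int dev_open(struct inode *inode, struct file *filp)
{
    pr_info("mydev: opened device %d:%d\n", imajor(inode), iminor(inode));
    return 0;
}

static const struct file_operations my_fops = {
    .owner = THIS_MODULE,
    .open  = dev_open,
};

static int __init my_init(void)
{
    dev_t devno = MKDEV(240, 0);               /* hypothetical numbers */
    register_chrdev_region(devno, 1, "mydev"); /* claim major/minor    */
    cdev_init(&my_cdev, &my_fops);             /* attach our fops      */
    return cdev_add(&my_cdev, devno, 1);       /* make the cdev live   */
}

static void __exit my_exit(void)
{
    cdev_del(&my_cdev);
    unregister_chrdev_region(MKDEV(240, 0), 1);
}

module_init(my_init);
module_exit(my_exit);
MODULE_LICENSE("GPL");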
SETUP: Win7 64-bit, R2015b, 16 GB of RAM, CPU i7-2700
table() is a fundamental MATLAB class which is also sealed, hence I cannot subclass it.
I want to fix some methods of this class and add new ones.
For instance, table.disp() is fundamentally broken: do NOT try disp(table(rand(1e7,1))), or forget the ; in the command window. The variable takes only 76 MB in RAM, but the display is unbuffered and it will stall your system!
Can I override methods like table.disp() without writing into matlabroot\toolbox\matlab\datatypes\@table?
Can I extend the table class with a new method under C:\MATLAB\@table\ismatrixlike.m? Why do I get
ismatrixlike(table)
Undefined function 'ismatrixlike' for input arguments of type 'table'.
Obviously, I did
addpath C:\MATLAB\
rehash toolboxcache
I also tried clear all.
The path has (alphabetic) precedence over matlabroot, but is missing a table.m class definition. If I add the native class definition to C:\MATLAB\@table, then I can run my new method (after a clear all). However:
>> methods(table)
Methods for class table:
classVarNames ismatrixlike table varfun
convertColumn renameVarNames unstack
only lists the methods in the new @table folder, even though (some of) the old methods still work, e.g.
size(table)
This only partly solves the problem, since the native @table\private folder is now not accessible anymore and therefore many native methods are broken!
Why am I doing this? Because I do not want to wait another 2 years before table() is fixed. I have already lost entire days because I simply forgot a ; in the command window, and I cannot force a restart on my PC when it is running multi-day simulations, so I have to wait for the disk swap to end :(.
APPENDIX
More context about disp(table(rand(1e7,1))), and what happens when I hit it (luckily I am fast enough to CTRL-C out of it). The culprit is line 172 of table.disp(), which converts the numeric array into a cellstring (with the padding too!):
[cells, err, isLeft] = sprintfc(f, x, b);
After experimenting with several alternatives, I adopted the solution that interferes the least with MATLAB's native @table implementation and is easily removed if things go awry.
The solution:
copy the whole @table folder, i.e. fullfile(matlabroot,'toolbox','matlab','datatypes','@table'), into a destination where you have write permissions.
I picked the destination to be fullfile(matlabroot,'toolbox','local','myfiles'), since then I do not have to bother with OS cross-compatibility, i.e. matlabroot takes care of that for me.
paste into the destination your @table folder with the new, overloaded and overriding methods (partially overwriting the copied original files)
add the destination to the MATLAB path, before the original @table, e.g. addpath your_destination -begin
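In MATLAB code, the three steps could look roughly like this (a sketch; mkdir will warn if the folder already exists):

src  = fullfile(matlabroot,'toolbox','matlab','datatypes','@table');
dest = fullfile(matlabroot,'toolbox','local','myfiles');   % your destination
mkdir(dest);
copyfile(src, fullfile(dest,'@table'));  % step 1: copy the native @table
% step 2: overwrite/add your own methods inside fullfile(dest,'@table')
addpath(dest, '-begin');                 % step 3: shadow the native class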
Effects, pros and cons:
The native @table class/methods are now shadowed; try e.g. which table -all. However, this effect is quite clear, easily detectable and easily removed (delete the destination and remove it from the path);
No weird conflicts between the native @table (now shadowed) and the new @table;
All methods, new and old, are visible, try methods(table);
Private table methods are accessible...
... but you are forced to use them.
Exposing the new methods (user-implemented) to the private ones requires more maintenance and direct handling of version conflicts in the table implementations.
You need write permissions on some eligible destination.
For those interested in the details, you can look into https://github.com/okomarov/tableutils, specifically install_tableutils (the readme might not be up to date).
The following works for me:
Define a modified disp function, say disp_modified.m, as follows, and put it in your path:
function disp_modified(t)
if istable(t)
%// Do whatever you want to display tables
builtin('disp', '''disp'' function intercepted!')
else
%// For non-tables, call `disp` normally
builtin('disp', t)
end
Define disp as a function handle to the modified function (you can do that in startup.m to always have it by default):
disp = @disp_modified;
After this, in the command window I get
>> disp(1:5)
1 2 3 4 5
>> disp({1 2 3 'bb'})
[1] [2] [3] 'bb'
>> disp(table(rand(1e3,1)))
'disp' function intercepted!
Depending on the usage of the new class, perhaps you could follow a cleaner approach. The approach described in your post has the drawback that code used in your updated environment would not be easily portable to a new environment, and a program executed in your environment may behave differently in a different environment.
Some questions you could consider (and perhaps clarify) are: How do you intend to use the new class? Do you want to replace all the existing table uses? Do you want to be able to use it wherever a table class argument is expected? Or do you want to alter things so that each usage of the original table class in your environment uses the new class?
If you just need a new, improved table for your own use, you could consider encapsulating the original table class in a new class, e.g. MyTable: delegate all the methods you do not need to change to the original table methods, replace the methods you would like to improve, or add new ones.
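For instance, a rough sketch of such a wrapper (MyTable and its method bodies are illustrative, not tested code):

classdef MyTable
    properties (Access = private)
        T   % the wrapped native table
    end
    methods
        function obj = MyTable(varargin)
            obj.T = table(varargin{:});   % delegate construction
        end
        function disp(obj)
            % improved display: print a summary instead of dumping rows
            fprintf('MyTable with %d rows and %d variables\n', ...
                height(obj.T), width(obj.T));
        end
        function out = size(obj, varargin)
            out = size(obj.T, varargin{:});   % delegate to the native method
        end
    end
end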
Update: Just saw the complete solution on GitHub and understood what you intended to do. Nice work. I will leave the post here in case anyone finds it useful.
I have a Fortran project with some name conflicts (from doxygen's point of view). Sometimes a local variable in a procedure has the same name as a subroutine or function. For compilation/linking there are no problems, as the different definitions live separate lives, for instance:
progA/main.f defines and uses the variable delta.
libB/delta.f defines a function named delta.
progB/main.f uses the function delta defined in libB.
progB is linked with libB, progA is not linked with libB.
In this case, when generating call/caller graphs, or linked source code, the variable delta in progA/main.f will be identified as the function delta. Is there some combination of doxygen settings I can use to inform it that progA is not supposed to use definitions in libB, or something similar?
Another issue is that I may have functions/subroutines with the same name in different subdirectories. Again, as long as they are not linked together this is not a problem for compilation, but doxygen cannot identify which of them is meant in links, calls, etc. Is there a way to work around this (without renaming procedures, that is)?
I have created a project for identifying malicious files using an artificial neural network. I am giving some selected features from the PE structure as inputs to the neural network, and it is classifying files correctly. But according to this answer, https://security.stackexchange.com/questions/37921/windows-pe-file-and-malwares, code can be injected into a PE and values in the optional header can be changed! I wanted to know: is there any way to tell whether the PE structure has been modified?
One more link about injecting code into a PE file: http://www.codeproject.com/Articles/12532/Inject-your-code-to-a-Portable-Executable-file
You can't know whether the PE was modified if you don't have the original binary, but each compiler or packer has a signature (you can check with RDG, for example: http://www.rdgsoft.net/). You can use it to see whether that signature is gone, but it is possible that the signature is still there even though the binary was modified.
Otherwise, you can look for strange sections in the binary, or for values in the structure that are not logical.
You can also check that each section has the right protection, e.g. .text -> execute.
If you want to learn more about it, have a look at this link:
https://github.com/katjahahn/PortEx/tree/master/masterthesis
There you can read about the different strategies a malware can use (appending to the original binary, prepending, or dividing itself into multiple parts) and how to detect them, for instance which functions you should find in the import table: LoadLibrary, GetProcAddress, etc.
The author's tools let you test these methods in practice:
https://github.com/katjahahn/PortEx
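As a quick illustration of the section and import-table checks described above, here is a sketch using Python's third-party pefile package (the input file name and the exact list of suspicious imports are assumptions):

import pefile

pe = pefile.PE('sample.exe')   # hypothetical input file

# Check 1: does any section other than .text have the execute flag set?
IMAGE_SCN_MEM_EXECUTE = 0x20000000
for section in pe.sections:
    name = section.Name.rstrip(b'\x00').decode(errors='replace')
    if section.Characteristics & IMAGE_SCN_MEM_EXECUTE and name != '.text':
        print('suspicious executable section:', name)

# Check 2: imports typical of run-time code injection/unpacking.
suspicious = {b'LoadLibraryA', b'GetProcAddress', b'VirtualAlloc'}
for entry in getattr(pe, 'DIRECTORY_ENTRY_IMPORT', []):
    for imp in entry.imports:
        if imp.name and imp.name in suspicious:
            print('imports', imp.name.decode(), 'from', entry.dll.decode())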
I would like my Simulink Level 2 S function to sequentially run a series of test cases. Each test case populates a struct containing multiple numerical arrays.
I am currently trying to achieve the above in two steps:
Step 1: generate test cases using an M-file, and save them to the workspace as an array of structs.
Step 2: read the array of structs from the workspace into my model, using a Level 2 M-file S-function to process the test cases.
Step 2 is problematic for me, in that I cannot figure out a way to get the S-function block to accept the array-of-structs variable from the workspace as input. I want to avoid the simin method (another Stack Overflow discussion, here), because it seems to require representing the entire structure as a single data column, and I would like to keep the struct intact. I also tried using a Constant block with the struct array as the variable name, but that returns an 'Invalid setting for blockname parameter Value' error for the block.
Would appreciate any suggestions for getting this set up correctly. Also open to a different method of building the model, if absolutely necessary. Thanks!
EDIT: I realized that I can import the data within the S-function M-file itself, using load. This works for the purposes of my project. However, I am still interested in knowing whether a conventional solution exists for this.
If you just want to access the base workspace, I would consider using evalin('base', expression) inside your M-file S-function:
mystruct = evalin('base','MyStructFromWorkspace');
% (process mystruct)
It should also do the trick.
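For completeness, here is a minimal sketch of where that call could sit in a Level-2 MATLAB S-function (the port configuration, sample time, and the struct field 'value' are assumptions, not tested code):

function msfun_testcases(block)
% Level-2 MATLAB S-function sketch; all names are illustrative.
setup(block);

function setup(block)
block.NumInputPorts  = 0;
block.NumOutputPorts = 1;
block.OutputPort(1).Dimensions   = 1;
block.OutputPort(1).SamplingMode = 'Sample';
block.SampleTimes = [1 0];                  % hypothetical discrete rate
block.RegBlockMethod('Outputs', @Outputs);

function Outputs(block)
% Pull the array of structs straight from the base workspace.
testcases = evalin('base', 'MyStructFromWorkspace');
idx = min(numel(testcases), floor(block.CurrentTime) + 1);
% (process testcases(idx)); output a placeholder scalar from it
block.OutputPort(1).Data = testcases(idx).value(1);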
For my work, I sometimes have to deal with logfiles from a binary protocol (the logfiles contain hexdumps of the messages). I want to write a Perl script that can interpret the binary data for me and print the contents in a more friendly format.
I have a (machine readable) description of the protocol messages in a proprietary format, and I have (mostly) figured out how to parse that format (the parts I can't fully understand are not related to my goal, so I can just ignore them), so I can convert the description into a data structure for use in my script.
Because the protocol description only rarely changes, it seems a waste to re-parse the protocol description each time I want to analyse a logfile, but on the other hand, if the description does change or if I accidentally throw away my pre-parsed form of the description, then I would like my script to automatically trigger a re-parsing of the description.
What is the best way to realise this?
Assuming that the protocol description lives in a file accessible to the script, have a function that reads in the parsed data and caches the parsed results in an intermediate file. The logic is very simple, but the steps below look verbose because I tried to write out the full spec; in reality it should take fewer than 10 lines of Perl code (see the sketch after the list).
1. Check whether the intermediate cache file exists. If it does not (or cannot be read), skip to the proprietary parsing step (#4).
2. If you can read the intermediate cache file, read in the "protocol description timestamp" field (described below). Then find the modification time of the "protocol description" file via stat() and compare. If the modification time of the "protocol description" file is newer than the cache file's stored timestamp, skip to the proprietary parsing step (#4).
3. Else (the "protocol description" file has not been modified since the cache was written), read the cached data via Data::Dumper or Storable. End.
4. If you need to re-parse because of the logic in #1 or #2, read in the "protocol description" file and parse it into your data structure.
5. Then create a hash with 2 keys: "protocol_description_timestamp" (with the value being the modification time of the protocol description file derived from the stat call) and a second key "data", with the value being a reference to the data structure you just produced as a result of parsing.
6. Then save that hash into the intermediate cache file using Storable or Data::Dumper or any other method of your choice for storing Perl data structures.
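A minimal sketch of this logic using Storable (the file names and parse_description() stand in for your own description file and proprietary parser):

use strict;
use warnings;
use Storable qw(store retrieve);

my $desc  = 'protocol.desc';          # hypothetical description file
my $cache = "$desc.cache";            # intermediate cache file

my $data;
if (-r $cache) {
    my $cached = retrieve($cache);
    # Reuse the cache unless the description changed after it was parsed.
    $data = $cached->{data}
        unless (stat($desc))[9] > $cached->{protocol_description_timestamp};
}
unless (defined $data) {
    $data = parse_description($desc);  # your proprietary parsing step
    store({ protocol_description_timestamp => (stat($desc))[9],
            data                           => $data }, $cache);
}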
You can use a Makefile for this. Make the data structure you use a Makefile target that depends on the protocol description.
When make notices that the protocol description was updated more recently than the cached data, it will run the commands you specify to recreate the data.
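Sketched as a make rule, with hypothetical file and script names (the recipe line must start with a tab):

protocol.cache: protocol.desc
	perl parse_description.pl protocol.desc > protocol.cache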