How to do a "kill_proc()" in Linux Kernel 2.6.31.5 - linux-device-driver

Trying this free forum for developers. I am migrating a serial driver to kernel 2.6.31.5, and I have used various books and articles to solve problems going from 2.4.
Now I have a couple of kill_proc() calls that are not supported anymore in kernel 2.6.31.5.
What would be the fastest way to migrate these to the kernel 2.6.31.5 way of killing a thread? The books say to use kill(), but that does not seem to exist in 2.6.31.5. Using send_signal would be a good way, but how do I do this? There must be a task_struct or something; I wish I could just provide my PID and SIGTERM and go ahead and kill my thread, but it seems more complicated, having to set up a struct with parameters I do not know of.
If anyone has a real example, or a link to a place with up-to-date info on kernel 2.6.31, I would be very thankful. Simply put, I need to kill my thread, and this is not supposed to be hard. ;)
This is my code now:
kill_proc(ex_pid, SIGTERM, 1);
/Jörgen

For use with kthreads, there is now kthread_stop(), which the caller (e.g. the module's exit function) can invoke. The kthread itself has to check for a pending stop request using kthread_should_stop(). Examples of this are readily available in the kernel source tree.
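A minimal sketch of that pattern against the 2.6.31-era API (the thread body, names, and timing below are placeholders, not taken from your driver):

#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/module.h>
#include <linux/sched.h>

static struct task_struct *my_task;

/* Thread body: run until someone calls kthread_stop() on this thread. */
static int my_thread_fn(void *data)
{
    while (!kthread_should_stop()) {
        /* do one unit of work, then sleep for a tick's worth of time */
        schedule_timeout_interruptible(HZ);
    }
    return 0;
}

static int __init my_init(void)
{
    my_task = kthread_run(my_thread_fn, NULL, "my_serial_thread");
    if (IS_ERR(my_task))
        return PTR_ERR(my_task);
    return 0;
}

static void __exit my_exit(void)
{
    /* Replaces kill_proc(ex_pid, SIGTERM, 1): wakes the thread and
     * blocks until my_thread_fn() has returned. */
    kthread_stop(my_task);
}

module_init(my_init);
module_exit(my_exit);
MODULE_LICENSE("GPL");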

Related

Porting word2vec to RISC-V.. potential proxy kernel issue?

We are trying to port word2vec to RISC-V. Towards this end, we have compiled word2vec with a cross compiler and are trying to run it on Spike.
The cross compiler compiles the standard RISC-V benchmarks and they run without failure on Spike, but when we use the same setup for word2vec, it fails with "bad syscall #179!". We tried two different versions; both fail around the same place, a minute or two into the run, while executing these instructions. After going through the loop several hundred thousand times, we see C1, C2 printed and then the crash. We are thinking this is more of a Spike/pk issue than a word2vec issue.
Has anyone had similar experiences when porting code to RISC-V? Any ideas on how we might track down whether it's the proxy kernel?
A related question is about getting gdb working with Spike; I'll post that separately.
Thank you.
The riscv-pk does not support all possible syscalls. You'll need to track down which syscall it is and whether you can implement it in riscv-pk, or whether you need to move to running on a different kernel. For example, riscv-pk does not support any threading-related syscalls, as multithreaded kernel support is explicitly a non-goal of riscv-pk.
I would also be wary of using riscv-pk in general. It's a very simple, thin kernel, which is great for running newlib user applications in the beginning, but it lacks rigorous testing and validation, so running applications that stress the virtual memory system, rely on lots of syscalls (ioctl and friends), or expect a more glibc-like environment may prove problematic.
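If it helps to isolate the failing call, the sketch below (an assumption-laden probe, not project code: it presumes your usual riscv-gcc cross toolchain and the RISC-V convention of passing the syscall number in a7) issues raw syscall 179 directly; run under spike pk it should print the same "bad syscall #179!" if that number really is unhandled there.

/* probe.c: issue raw syscall #179 to see how pk reacts to it.
 * Build with your riscv cross compiler and run it under spike pk. */
int main(void)
{
    register long a7 __asm__("a7") = 179;  /* syscall number from the error message */
    register long a0 __asm__("a0") = 0;    /* first syscall argument (unused here) */

    __asm__ volatile ("ecall"
                      : "+r"(a0)
                      : "r"(a7)
                      : "memory");

    return (int)a0;
}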

how can I call Unix system calls interactively?

I'd like to play with Unix system calls, ideally from Ruby. How can I do so?
I've heard about Fiddle, but I don't know where to begin or which C library I should attach it to.
I assume by "interactively" you mean via irb.
A high-level language like Ruby is going to provide wrappers for most kernel syscalls, of varying thickness.
Occasionally these wrappers will be very thin, as with sysread() and syswrite(). These are more or less equivalent to read(2) and write(2), respectively.
Other syscalls will be hidden behind thicker layers, such as with the socket I/O stuff. I don't know if calling UNIXSocket.recv() counts as "calling a syscall" precisely. At some level, that's exactly what happens, but who knows how much Ruby and C code stands between you and the actual system call.
Then there are those syscalls that aren't in the standard Ruby API at all, most likely because it wouldn't make much sense to expose them, like mmap(2). That syscall is all about raw pointers to memory, something you've chosen to avoid by using a language like Ruby in the first place. There happens to be a third-party Ruby mmap module, but it's really not going to give you all the power you can tap from C.
The syscall() interface Mat pointed out in the comment above is a similar story: in theory, it lets you call any system call in the kernel. But, if you don't have the ability to deal with pointers, lay out data precisely in memory for structures, etc., your ability to make useful calls is going to be quite limited.
If you want to play with system calls, learn C. There is no shortcut.
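To make that concrete, here is a small, self-contained C sketch (purely illustrative, nothing Ruby-specific): it calls mmap(2) directly and reads the file through the raw pointer it returns, exactly the kind of thing the Ruby wrappers shield you from.

/* map.c: map a file into memory and read its first byte through a raw pointer */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* mmap(2) hands back a raw pointer into this process's address space */
    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    printf("first byte of %s: 0x%02x\n", argv[1], (unsigned char)p[0]);

    munmap(p, st.st_size);
    close(fd);
    return 0;
}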
Eric Wong started a mailing list for system-level programming in Ruby. It isn't terribly active now, but you can get to it at http://librelist.com/browser/usp.ruby/.

How do OSes handle context switching?

As I understand it, every OS needs some mechanism to periodically check whether it should run some tasks and suspend others.
One way would be some kind of timer on whose expiry the OS will check if it should run/suspend some task.
Generally, say on an ARM system, that would probably be some kind of ISR.
My real question is that I've only been able to visualize this, not see it anywhere. Could someone point to some free/open RTOS code where I can actually see the code that handles preemption/scheduling?
freertos.org. The entire OS is open source, and right there for you to see. And there are dozens of different ports to compare and contrast. For the context-switch code, you will want to look in the ports directory, in any one of many files called port.c, port.asm, etc. And yes, in the case of FreeRTOS all context switches are performed in interrupts (a tick timer ISR, or any other SysCall interrupt).
A context switch is very-much processor specific, as the list of registers to save and the assembly code to save them varies between processor families, and sometimes within a given family. As a result each port has a separate file for this code.
The scheduling (selection of next task to run), on the other hand, is done in a file called tasks.c, which is common to all ports and references the port-specific code.
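As a very rough, runnable illustration (plain C, explicitly not FreeRTOS code; the task names and the simple round-robin policy are made up for the example), the selection step that a tick interrupt drives boils down to something like this, with the real port-specific files wrapping it in the assembly that saves and restores registers:

/* sched_sim.c: toy simulation of tick-driven task selection (not an RTOS) */
#include <stdio.h>

#define NUM_TASKS 3

enum task_state { READY, BLOCKED };

struct task {
    const char      *name;
    enum task_state  state;
};

static struct task tasks[NUM_TASKS] = {
    { "uart_rx", READY   },
    { "logger",  BLOCKED },
    { "idle",    READY   },
};

static int current = 0;

/* What a tick ISR conceptually does: pick the next READY task.
 * A real RTOS follows this with the processor-specific context switch. */
static void tick_handler(void)
{
    for (int i = 1; i <= NUM_TASKS; i++) {
        int candidate = (current + i) % NUM_TASKS;
        if (tasks[candidate].state == READY) {
            current = candidate;
            break;
        }
    }
    printf("tick -> running %s\n", tasks[current].name);
}

int main(void)
{
    for (int tick = 0; tick < 5; tick++)
        tick_handler();
    return 0;
}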
It is not the case that an RTOS simply context switches periodically - that is how most GPOS work. In an RTOS the scheduler runs on any scheduling event. These include the system tick, but also message post, event trigger, semaphore give, or mutex unlock, for example.
On ARM Cortex-M, CMSIS 3.x includes an RTOS API (intended primarily for RTOS developers rather than as a complete RTOS itself); the source for this will include a context-switching mechanism.
If you want a detailed description of a simple RTOS, you might consider reading µC/OS-II: The Real-Time Kernel or the slightly more sophisticated µC/OS-III: The Real-Time Kernel.
FreeRTOS is increasingly popular, though perhaps a little unconventional architecturally. A more complete (in that it is not just a scheduling kernel but a more complete OS) and very powerful option is eCos.
You can take a look at xv6.
It's not an RTOS; it is just a skeleton OS (based on V6 Unix) meant for academic purposes.
In the xv6 book, take a look at chapter 4: there is an explanation, along with the code, of how scheduling is done on a small OS like xv6. xv6 puts a process to sleep when it is waiting for disk or some other I/O operation, and there is also a timer interrupt every 100 ms to switch processes.
There is also an explanation, with code, of how the context switch takes place, what information is saved (the context frame of a process), and how the switch from user to kernel mode happens when the scheduler has to run.
The best part is that the amount of reading you have to do to understand these concepts is much less than with a typical OS reference book :) The code is relatively small; you can in fact run xv6 on QEMU, set breakpoints in sched, swtch, and other functions, and actually see the information saved during a context switch (how to run xv6 is in this link).
You don't have to read the previous chapters to understand chapter 4; there isn't much dependency. xv6 uses struct proc to identify a process, ptable for all the currently running processes in the system, and proc->context, which refers to the state the process is in (register values, etc.) and is saved by the scheduler.
Cheers :)

Speed improvements for Perl's chameneos-redux in the Computer Language Benchmarks Game

Ever looked at the Computer Language Benchmarks Game (formerly known as the Great Language Shootout)?
Perl has some pretty healthy competition there at the moment. It also occurs to me that there's probably some places that Perl's scores could be improved. The biggest one is in the chameneos-redux script right now—the Perl version runs the worst out of any language: 1,626 times slower than the C baseline solution!
There are some restrictions on how the programs can be made and optimized, and there is Perl's interpreted runtime penalty, but 1,626 times? There's got to be something that can get the runtime of this program way down.
Taking a look at the source code and the challenge, how can the speed be improved?
I ran the source code through the Devel::SmallProf profiler. The profile output is a little too verbose to post here, but you can see the results yourself using $ perl -d:SmallProf chameneos.pl 10000 (no need to run it for 6000000 meetings unless you really want to!) See perlperf for more details on some profiling tools in Perl.
It turns out that using semaphores is the major bottleneck. The lion's share of total CPU time is spent on checking whether a semaphore is locked or not. Although I haven't had enough time to look at why the source code uses semaphores, it may be that you can work around having to use semaphores altogether. That's probably your best shot at improving the code's performance.
As Zaid posted, Thread::Semaphore is rather slow. One optimization could be to use the implicit locks on shared variables instead of semaphores. It should be faster, though I suspect it won't be faster by much.
In general, Perl's threading implementation sucks for any kind of usage that requires a lot of inter-thread communication. It's very suitable for tasks with little communication (unlike CPython's threads and CRuby's threads, Perl's are actually preemptive).
It may be possible to improve that situation, we need better primitives.
I have a version based on another version from Jesse Millikian, which I think was never published.
I think it may run ~ 7x faster than the current entry, and uses standard modules all around. I'm not sure if it actually complies with all the rules though.
I've tried the forks module on it, but I think it slows it down a bit.
Anyone tried s/threads/forks/ on the Perl entry? Or Coro / Coro::MP, though the latter would probably trigger the 'interesting alternative implementations' clause.

socket passing between processes

Is there a way to pass a socket between processes (not same address space) in Windows?
I found this info on Shared Sockets, and therefore believe that it is possible.
"The WSADuplicateSocket function is introduced to enable socket sharing across processes"...
More info: the source code of Apache (httpd-2.2.11-win32-src.zip) has usage of the WSADuplicateSocket API, with comments.
[EDIT]
Recently I found these great samples on this question.
How duplication is done in the unmanaged world - Socket Duplication - Part 1
Is it possible to transfer sockets from unmanaged processes? - Socket Duplication - Part 2
See the Remarks section of WSADuplicateSocket. It effectively says you can use Your Favorite Interprocess Communication Scheme to send the WSAPROTOCOL_INFO structure (it's just data!) to the target.
There are lots of IPC schemes. I'd probably use shared memory with Boost::interprocess. But you could use SendMessage if the target has a window + message loop. Or the Clipboard API, for that matter (though somewhat weird). The mechanism is your choice.
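A hedged sketch of what the two halves might look like (Winsock 2; WSAStartup/cleanup is omitted, and targetPid plus the IPC transfer itself are placeholders, not real APIs):

/* dup_socket_sketch.c: outline of duplicating a socket across processes.
 * Link with ws2_32.lib. */
#include <winsock2.h>
#include <windows.h>
#include <stdio.h>

/* In the process that owns the socket: describe it for the target process. */
int share_socket(SOCKET s, DWORD targetPid)
{
    WSAPROTOCOL_INFO info;
    if (WSADuplicateSocket(s, targetPid, &info) != 0) {
        fprintf(stderr, "WSADuplicateSocket failed: %d\n", WSAGetLastError());
        return -1;
    }
    /* Send the raw bytes of `info` to the target with the IPC of your choice,
     * e.g. shared memory, a pipe, or SendMessage as described above. */
    return 0;
}

/* In the target process, after receiving `info` over IPC: rebuild the socket. */
SOCKET adopt_socket(WSAPROTOCOL_INFO *info)
{
    SOCKET s = WSASocket(FROM_PROTOCOL_INFO, FROM_PROTOCOL_INFO,
                         FROM_PROTOCOL_INFO, info, 0, 0);
    if (s == INVALID_SOCKET)
        fprintf(stderr, "WSASocket failed: %d\n", WSAGetLastError());
    return s;
}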
If you're creating the child process, there are some things that might do it for you. See
http://www.tangentsoft.net/wskfaq/articles/passing-sockets.html
(I know that this one worked in the ancient past; no idea if it works on current versions)
http://msdn.microsoft.com/en-us/library/ms682499.aspx
-- MarkusQ