How to generate a random number within the FreeBSD kernel?

For my Operating System course, I am implementing the lottery scheduling algorithm in place of the scheduler FreeBSD already ships with.
To implement lottery scheduling, I have to be able to use random numbers. However, I can't use the C standard library (which provides the rand function) within the FreeBSD kernel.
I am modifying two .c files (sched_ule.c and kern_switch.c) in /sys/kern, and I am trying to create a random variable within sched_ule.c using random.h, which is in /sys/sys.
For now, I'm hoping to take a small step and get a random number printed out after running make buildkernel and rebooting.

Implement your own pseudo-RNG. C's rand does not generate cryptographically secure randomness anyway, so a few lines of your own would do just as well.
If you are on a post-Ivy Bridge Intel x86, you could just execute the rdrand instruction (which is a hack, but would work fine). I suspect other architectures have a similar instruction or mechanism.
Use FreeBSD's own kernel randomness functions. It almost certainly has a randomness extractor implementation.
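A minimal sketch of the first and third options, assuming a FreeBSD kernel context such as sched_ule.c; arc4random() is FreeBSD's in-kernel generator declared in sys/libkern.h, while pick_ticket() and the LCG seed/constants are merely illustrative:

#include <sys/types.h>
#include <sys/systm.h>    /* kernel printf() */
#include <sys/libkern.h>  /* arc4random() */

static uint32_t lcg_state = 12345;    /* any nonzero seed */

/* Classic 32-bit linear congruential generator (Numerical Recipes constants). */
static uint32_t
lcg_rand(void)
{
    lcg_state = lcg_state * 1664525u + 1013904223u;
    return (lcg_state);
}

/* Hypothetical helper: draw a winning ticket for lottery scheduling. */
static void
pick_ticket(uint32_t total_tickets)
{
    /* Modulo bias is negligible for small ticket counts. */
    uint32_t winner = arc4random() % total_tickets;    /* or lcg_rand() */

    printf("lottery: winning ticket %u of %u\n", winner, total_tickets);
}

For a lottery scheduler, statistical quality matters far more than cryptographic strength, so the simple LCG is adequate and avoids calling into the kernel's stronger generator on every scheduling decision.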

Confusions about address binding

Compile time. If you know at compile time where the process will reside
in memory, then absolute code can be generated. For example, if you know
that a user process will reside starting at location R, then the generated
compiler code will start at that location and extend up from there. If, at
some later time, the starting location changes, then it will be necessary
to recompile this code. The MS-DOS .COM-format programs are bound at
compile time.
What could cause the starting location to change? Could it be because of context switching/swapping?
Does absolute code mean binary code?
Load time. If it is not known at compile time where the process will reside
in memory, then the compiler must generate relocatable code. In this case,
final binding is delayed until load time. If the starting address changes, we
need only reload the user code to incorporate this changed value.
How is relocatable code different from absolute code? Does it contain info about base, limit and relocation registers?
How is reloading more efficient than recompiling? They mention that if the address changes, only a reload is needed, not a recompile.
Execution time. If the process can be moved during its execution from
one memory segment to another, then binding must be delayed until run
time.
Why might a process need to be moved during its execution?
The compile-time and load-time address-binding methods generate
identical logical and physical addresses. However, the execution-time address-binding scheme results in differing logical and physical addresses.
How do the compile-time and load-time methods generate identical logical and physical addresses?
To begin with, I would find a better source for your information. What you have is very poor.
What could cause the starting location to change? Could it be because of context switching/swapping?
You change the code or need the code to be loaded at a different location in memory.
Does absolute code mean binary code?
No. They are independent concepts.
How is relocatable code different from absolute code? Does it contain info about base, limit and relocation registers?
Relocatable code uses relative addresses, generally relative to the program counter.
(Base, limit and relocation registers would be a system-specific concept.)
How is reloading more efficient than recompiling? They mention that if the address changes, only a reload is needed, not a recompile.
Let's say two different programs use the same dynamic library. They may need to load it at different locations in memory. It's not an efficiency issue.
Why might a process need to be moved during its execution?
This is what was done in ye olde days before virtual memory. To my knowledge no one does this any more.
How do the compile-time and load-time methods generate identical logical and physical addresses?
I don't know what the &^54 they are talking about. That statement makes no sense.
Dynamic libraries (.dll, .so) are relocatable because they might appear at different addresses in different applications, but in order to save memory the operating system keeps only one copy in physical memory (virtual memory is great), and each application has read-only access.
The same happens for applications that are relocatable. For security it is also wise for the addresses to be random - some remote attacks become slightly harder.

Specman beginner's questions

I am new to Specman.
I have a couple of questions:
I am trying to use the agent methodology. After writing the env, agent, BFM etc., what is the recommended way to create clock and reset? By writing a tb.v (instantiating the top Verilog module), or is there a better way?
How do I link the Specman env file to the tb? (Or maybe it's enough to link the ports of the different Specman files with a signals_map to the Verilog files?)
Most important: how do I run the environment with irun?
I was thinking of creating a file listing all the Verilog files, e.g. veri.lst,
while the Specman top imports all the Specman files, e.g. spec_top.e:
irun -access +wrc veri.lst spec_top.e
Should that be OK?
Should I mention the top-level module in the command?
Should I put the test name in the command in a special way?
Thanks a lot for all the help!
Cadence recommends driving clocks from inside an HDL testbench (i.e. written in Verilog in your case). This is because every time the simulator yields control to Specman it wastes processor time, so you want to minimize the number of such switches as much as possible.
Linking the env to the TB is done by connecting the Verilog signals of interest to the corresponding Specman ports (using hdl_path()).
W.r.t. running it, there are two things to keep in mind: e code can be executed in compiled or in interpreted mode, and compiled code is faster but can't be debugged. You have to tell irun what you want compiled and what you want interpreted:
irun -f veri.lst \
compiled_top.e \
-snload interpreted_top.e
What you typically compile are files which you don't expect to change (verification components that you buy or reuse from other projects, for example). The rest of your files you'd load interpreted to be able to easily debug.
Adding to Tudor's great answer -
First - yes, connecting the e TB to the DUT is done using hdl_path() and binding the ports to external. You usually would have one unit designated for the interface, so configuring it would look something like this:
extend signal_map {
    // name of the instance of the verilog module you interface
    keep hdl_path() == "sub_system_a";
    keep bind(sig_clock, external);
    // name of the clock signal
    keep sig_clock.hdl_path() == "clk";
};
Please take a look in the IES release at the UVM examples.
They are in specman/uvm/uvm_examples.
For example, check out specman/uvm/uvm_examples/xserial/e/xserial_collector_h.e.
And about the clock -
Connecting a clock in the e TB to the design is very simple. Something like this:
unit synch {
    sig_clock : in simple_port of bit is instance;
    keep bind(sig_clock, external);
    event clock is rise(sig_clock$) @sim;
    // can also define the event on fall or change
};
Now the clock event can be used as the sampling event for TCMs and temporals. This is a simple, fast way to use the clock in the TB.
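For instance, a time-consuming method (TCM) sampled on that clock could look like the following sketch; count_edges() and its body are invented for illustration, only the @clock sampling event comes from the unit above:

extend synch {
    // hypothetical TCM, sampled on the clock event defined above
    count_edges() @clock is {
        wait [4] * cycle;        // blocks for four clock edges
        out("four clock edges seen");
    };
    run() is also {
        start count_edges();     // TCMs must be started explicitly
    };
};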
Another way to use the clock is more "acceleration ready". In this methodology you would implement a clock agent in Verilog, and it would provide "clock services" to the TB. Following it, the TB does not contain any "wait cycles" of its own; instead it calls the clock agent task wait_cycles() and waits for an indication that the required number of clock cycles has passed.
This is a rather new methodology, oriented toward being acceleration ready.
It will be demonstrated in the UVM examples in the next IES release, 15.1.
/efrat

Problems with implementing a 0000-9999 counter on an FPGA (seven segment)

EDIT 1
Okay, I couldn't post a long comment (I am new to the website, so please accept my apologies), so I am editing my earlier question. I have tried to implement multiplexing in two attempts:
- 2nd attempt
- 3rd attempt
In the 2nd attempt I send the seven-segment variables of each module to the module one step ahead of it, and when they all reach the final top module I multiplex them. There is also a clock module which generates a clock for the units module (making the units place change twice a second) and a clock for multiplexing (switching between the displays 500 times per second). Of course, I read that my board has a clock frequency of 50 MHz, so these clock calculations are based on that figure.
In the 3rd attempt I have done the same thing in one single module. See the 2nd attempt first and then the 3rd one.
Both give errors right after synthesis, and lots of unfamiliar warnings.
EDIT 2
I have been able to synthesize and implement the program in attempt 4 (which I am not allowed to post since my reputation is low) by using the save flag for the variables variables1, variables2 and variables3 (which were giving warnings about unused pins), but the program doesn't run on the FPGA; it simply shows the number 3777. There are also still "combinatorial loop" warnings for some things related to some variables (I am sorry, I am new to all this Verilog), but you can see all of them in attempt 3 as well.
You cannot implement counters with loops, and neither can you implement cascaded counters with nested loops.
Writing HDL is not writing software! Please read a book or tutorial on VHDL or Verilog to learn how to design basic hardware circuits. There is also the Synthesis and Simulation Guide 14.4 - UG626 from Xilinx; have a look at page 88.
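To illustrate the point: a synthesizable counter is a clocked always block with no loops at all. Here is a minimal sketch (module and signal names are made up) of one BCD digit that can be cascaded:

module bcd_digit (
    input  wire       clk,
    input  wire       en,      // count enable, e.g. a 2 Hz tick
    output reg  [3:0] digit,
    output wire       carry    // high for one en tick when rolling over 9 -> 0
);
    initial digit = 4'd0;      // power-on value (supported by FPGA synthesis)

    assign carry = en && (digit == 4'd9);

    always @(posedge clk)
        if (en)
            digit <= (digit == 4'd9) ? 4'd0 : digit + 4'd1;
endmodule

Chaining four of these, with each stage's carry feeding the next stage's en, gives the 0000-9999 count without a single loop.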
Edit1:
Now it's possible to access your zip file without any Dropbox credentials, and I have looked into your project. Here are my comments on your code.
I'll number my bullets for better reference:
1. Your project has 4 mostly identical UCF files. The difference is only in assigning different anode control signals to the same pin location. This will cause errors in the post-synthesis steps (multiple nets assigned to one pin). Normally, simple projects have only one UCF file.
2. The Nexys 2 board has a 4-digit 7-segment display with common cathodes and switchable common anodes. In total these are 8+4 wires to control. A time-multiplexing circuit is needed to switch through every digit of your 4-digit output vector at 25 Hz < f < 1 kHz.
3. Choosing a nested hierarchy is not so good. One major drawback is the passing of many signals from every level to the topmost level to connect them to the FPGA pins. I would suggest a top-level module and 4 counters on level one. The top-level module can also provide the time-multiplexing circuit (see the sketch below) and the binary to 7-segment encoding.
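A possible shape for that time-multiplexing circuit, assuming the 50 MHz board clock from the question; the module and signal names are invented for illustration:

module seg_mux (
    input  wire        clk,        // 50 MHz board clock
    input  wire [27:0] segs,       // four pre-encoded 7-segment patterns
    output reg  [3:0]  anode_n,    // active-low digit enables
    output reg  [6:0]  seg_out
);
    reg [17:0] prescale = 18'd0;
    wire [1:0] sel = prescale[17:16];    // digit select steps every 2^16 clocks (~763 Hz)

    always @(posedge clk) begin
        prescale <= prescale + 1'b1;
        anode_n  <= ~(4'b0001 << sel);   // enable exactly one digit
        seg_out  <= segs[sel*7 +: 7];    // route that digit's pattern
    end
endmodule

Each digit is then refreshed at roughly 190 Hz, comfortably inside the 25 Hz to 1 kHz window.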

VHDL simulation in real time?

I've written some code that has an RTC component in it. It's a bit difficult to do proper simulation of the code, because the clock speed is set to 50 MHz, so seeing any 'real time' events take place would take forever. I did try simulating for 2 seconds in ModelSim, but it ended up crashing.
What would be a better way to do it if I don't have an evaluation board to burn and test with a scope?
If you could provide a more specific example of exactly what you're trying to test and what is chewing up your simulation cycles, that would be helpful.
In general, if you have a lot of code that you need to test in simulation, it helps if you can create testbenches of the sub-modules and test them first. Often, if you simulate at the top (chip) level and try to stimulate sub-modules that are buried deep in the hierarchy of a design, it takes many clock ticks just to get data into and out of the sub-module. If you simulate the sub-module directly, you have direct access to the module's I/O and can test the things you want to test in fewer cycles than if you try to reach it from the top level.
If you are trying to test logic that has very deep FIFOs you are trying to fill, or a specific count of a large counter you're trying to hit, you can either add logic to your code to help create those conditions in fewer cycles (like a load instruction on the counter), or you can force the values of internal signals of your design from the testbench itself.
These are just a couple of general ideas. Again, if you provide more detail about what you're simulating, there are probably people on this forum who can provide help that is more specific to your problem.
As already mentioned by Ciano, if you provided more information about your design we would be able to give a more accurate answer. However, there are several tips that hardware designers should follow, especially for complex system simulation. Those I use most are listed below:
Hierarchical simulation (as Ciano already posted): instead of simulating the entire system, try to simulate smaller sets of modules.
Selective configuration: most systems require initialization processes such as reset initialization time, external chip register initialization, etc. Usually a few of them are not required for simulation, and you may use a global constant to skip these stages when simulating, like:
constant SIMULATION_ENABLE : STD_LOGIC := '1';
...;
-- in reset condition:
if SIMULATION_ENABLE = '1' then
    currentState <= state_executeSystem; -- jump the initialization procedures
else
    currentState <= state_initializeSystem;
end if;
Be careful: do not modify your code directly (hard-coded). As the system grows, it becomes impossible to remember which parts of it you modified for simulation. Use constants instead, as in the example above, to configure modules for the simulation profile.
Scaled time/size constants: instead of using the real values for times and sizes every time (such as time events, memory sizes, register file sizes, etc.), use scaled values whenever possible. For example, if you are building an RTC that generates an interrupt to the main system every 60 seconds, scale your constants (if possible) so that it generates interrupts after about 6 ms or 60 us instead. Of course, the scale choice depends on your system. In my designs I use two global configuration files, one for simulation and the other for synthesis. Most constant values are scaled down to lower the simulation time, as in the sketch below.
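A concrete sketch of the two-configuration-file idea (package name and values invented): keep the same constant name in both files, scale it down in the simulation one, and point the simulation and synthesis scripts at different files:

-- sim_config_pkg.vhd (simulation flavor): with a 50 MHz clock, a 60 s RTC
-- tick would be 3e9 cycles; simulate with 3000 cycles (60 us) instead.
package sim_config_pkg is
    constant RTC_TICK_CYCLES : natural := 3000;
end package sim_config_pkg;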
Increase the abstraction: for bigger modules it might be useful to create a simplified, more abstract module acting as a model of the real one. For example, if you have a processor with this RTC (which you mentioned) as a peripheral, you may create a simplified module of the RTC. Assuming you only need its interrupt, the model could be as simple as:
type time_vector is array (1 to 2) of time;
constant INTERRUPT_EVENTS : time_vector := (
    32 ns,
    100 ms
);

process
begin
    for i in INTERRUPT_EVENTS'range loop
        rtcInterrupt <= '0';
        wait for INTERRUPT_EVENTS(i);
        rtcInterrupt <= '1';
        wait until rising_edge(clk);
    end loop;
    wait;
end process;

VHDL Bus Functional Modelling - Can't put groups of procedures into a package to clean up the code

I want to organize a working bus functional model and push commonly used procedures (which look like CPU subroutines) out into a package and get them out of the main cpu model, but I'm stuck.
The procedures lose access to the hardware signals when they're pushed out into a package.
In Verilog, I would put commonly used procedures out into an include file and link them into the CPU model as required for a given test suite.
More details:
I have a working bus functional model of a CPU, for simulation test benching.
At the "user interface" level I have a process called "main" running inside the CPU model which calls my predefined "instruction set" like this:
cpu_read(address, read_result);
cpu_write(address, write_data);
etc.
I bundle groups of those calls up into higher level procedures like
configure_communication_bus;
clear_all_packet_counters;
etc.
At the next layer these generic functions call a more hardware specific version which knows the interface timing for the design,
and those procedures then use an input record and output record to connect to the hardware module ports and waggle the cpu bus signals as required.
cpu_read calls hardware_cpu_read(cpu_input_record, cpu_output_record, address);
Something like this:
procedure cpu_read (address     : in  std_logic_vector(15 downto 0);
                    read_result : out std_logic_vector(31 downto 0)) is
begin
    hardware_cpu_read(cpu_input_record, cpu_output_record, address, read_result);
end procedure;
The cpu_input_record and cpu_output_record are declared as signals of type nnn_record in the cpu model vhdl file.
So this is all working, but every single one of these procedures is stored in the CPU VHDL module file, all in the procedure declaration section so that they are all in the same scope.
If I share the model with team members, they will need to add their own testing subroutines, and those also all end up in the same location in the file; their simulation test code has to go into the "main" process along with mine as well.
I'd rather link in various tests from outside the model, and keep only model-specific procedures in the model file.
Ironically, I can push the lowest-level hardware procedures out to a package and call them from within the "main" process, but the higher-level procedures can't be put into that package, or any other package, because they don't have access to the cpu_input_record and cpu_output_record signals.
I feel like there must be a simple way to clean up this code and make it modular, and I'm just missing something obvious.
I don't really think making a command interpreter and loading my test code into a behavioral ROM is the right way to go, by the way. Nor is fighting with the simulator interface to connect up a C program, but I may break down and try that...
Quick sketch of an answer (to the question I think you are asking! :-) though I may be off-beam...
To move the BFM subprograms into a reusable package, they need to be independent of the execution scope - that usually means a long parameter list for each of them. So using them in a testbench quickly gets tedious compared with the parameterless (or parameter-lite) versions you have now.
The usual workaround is to implement the BFM in a package, with the long parameter lists.
Then write parameter-lite local equivalents (wrappers) in the execution scope, which simply call the package versions, supplying all the parameters explicitly.
This is just boilerplate - not pretty, but it does allow you to move the BFM into a package. These wrappers can be local to the testbench, to a process within it, or even to a subprogram within that process.
(The parameter types can be records for tidiness: these are probably declared in a third package, shared between BFM, TB, and the synthesisable device under test...)
Thanks to overloading, there is no ambiguity between the local and BFM package versions, so the actual testbench remains as simple as possible.
Example wrapper (as a procedure rather than a function, since a bus-functional read has to wait on and wiggle signals, which a VHDL function cannot do):
procedure cpu_read (address     : in  unsigned;
                    read_result : out slv_32) is
begin
    BFM_pack.cpu_read (
        address     => address,
        read_result => read_result,
        rd_data_bus => tb_rd_data_bus,
        wait_n      => tb_wait_signal,  -- "wait" is a reserved word, so the formal needs another name
        oe          => tb_mem_oe
        -- ditto for all the signals, constants and variables it needs from the tb_ scope
    );
end procedure cpu_read;
Currently your test procedures require two extra signal parameters, cpu_input_record and cpu_output_record. This is not so bad. It is not uncommon to just have these on all procedures that interact with the CPU and be done with it. So use hardware_cpu_read rather than cpu_read, add cpu_input_record and cpu_output_record to your configure_communication_bus and clear_all_packet_counters procedures, and be done. Perhaps choose shorter names. A sketch of such a signature follows.
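A minimal sketch of what that signature could look like in the package, using the record signal names from the question; nnn_record stands in for your actual record type:

procedure configure_communication_bus (
    signal cpu_input_record  : inout nnn_record;  -- drives the DUT's CPU bus inputs
    signal cpu_output_record : in    nnn_record); -- samples the DUT's CPU bus outputs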
I take a similar approach, except I use only one record with resolved elements. To make this work, you need to initialize the record so that all elements are non-driving (i.e. 'Z' for std_logic). To make this more flexible, I have created resolution functions for integer, time, and real. However, this only saves you one signal; not a huge win. Perhaps half way to where you think you want to be, and it is more work than what you are doing.
For VHDL-201X, we are working on syntax to allow parameters/ports to map automatically to an identically named signal. This will get you to where you want to be with any of the approaches (yours, mine, or Brian's without the extra wrapper subprogram). It is posted here: http://www.eda.org/twiki/bin/view.cgi/P1076/ImplicitConnections. Given this, I would add the two records to your procedures and call it good enough for now.
Once you get past this problem, you seem to also be asking how to write separate tests using the same testbench. For this I use multiple architectures; I like to think of these as a factory class for concurrent code. To make this feasible, I separate the stimulus generation code from the rest of the testbench (typically: netlist connections and clock). My presentation, "VHDL Testbench Techniques that Leapfrog SystemVerilog", has an overview of this architecture along with a number of other goodies. It is available at: http://www.synthworks.com/papers/index.htm
You're definitely on the right track; in fact I have a variant like what you describe.
The catch is that I then build up whole subroutines using the "parameter-lite" procedures, and those are what I want to put in a package to share and reuse. The problem is that any procedure pushed out to a package can't call the parameter-lite procedures in the main VHDL file...
So what happens is that we have one main VHDL file with all the common CPU hardware setup routines, and every designer's test code, all in the same VHDL file...
Long story short, putting our test subroutines into separate files is really what I was hoping for...