Which addressing mode permits relocation without any change whatsoever in the code? - operating-system

The options are:
Indexed addressing
Base register addressing
PC relative addressing
Indexed and base-register addressing both work by adding the contents of their respective register (index / base register) to the address field of the instruction.
[The subtle difference is that the index register holds an "index" into an array, while the base register holds the "base" address of the array.]
To make the code relocatable, only the contents of the base / index register need to change, but even that can only be accomplished by executing some additional code.
PC-relative mode just references other instructions and data relative to the current PC contents, so the code works unchanged wherever it is loaded.
So, is option 3 the best answer?
Thanks!!

There are multiple parts:
- Data: this is about loading values that are in a binary
- Code: this is about jumps and calls
Data doesn't matter much. Code is more interesting.
In the absolute case, the call targets an absolute address. You write the code so that the address is right most of the time; when it is not, the loader patches it right there in the code.
In the relative case, you call through a nearby stub, which then calls the real target, so there is one extra hop.
In the end, option 3 is more flexible but might have a small runtime overhead.
Another reason why using PC-relative addressing everywhere can have extra costs is that the reach of a PC-relative offset might not cover the whole address space.
Absolute addressing can be a small optimization, but it costs more at load time when the assumed address turns out to be wrong.
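
To make the difference concrete, here is a hedged little C sketch (the addresses and offsets are made up purely for illustration): an absolute reference stores a final address and must be patched by the loader whenever the load base changes, while a PC-relative reference stores only a distance inside the image, which is the same no matter where the image ends up.

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t load_base = 0x4000;         /* where the loader places the image   */
    uint32_t target_offset = 0x0120;     /* target's position inside the image  */
    uint32_t ref_site_offset = 0x0080;   /* where the reference itself lives    */

    /* Absolute addressing: the stored value is a full address,
       so it must be recomputed (patched) for every new load_base. */
    uint32_t absolute_ref = load_base + target_offset;

    /* PC-relative addressing: the stored value is a distance between two
       points inside the image, so it never changes when the image moves. */
    int32_t relative_ref = (int32_t)target_offset - (int32_t)ref_site_offset;

    printf("absolute reference (needs patching): 0x%08x\n", absolute_ref);
    printf("PC-relative reference (load-independent): %d\n", relative_ref);
    return 0;
}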


Confusions about address binding

Compile time. If you know at compile time where the process will reside
in memory, then absolute code can be generated. For example, if you know
that a user process will reside starting at location R, then the generated
compiler code will start at that location and extend up from there. If, at
some later time, the starting location changes, then it will be necessary
to recompile this code. The MS-DOS .COM-format programs are bound at
compile time.
What can be the reason for the starting location to change? Can it be
because of context switching/swapping?
Does absolute code mean binary code?
Load time. If it is not known at compile time where the process will reside
in memory, then the compiler must generate relocatable code. In this case,
final binding is delayed until load time. If the starting address changes, we
need only reload the user code to incorporate this changed value.
How is relocatable code different from absolute code? Does it contain info about the base, limit and relocation registers?
How is reloading more efficient than recompiling? They mention that only a reload is needed, meaning no recompilation, just a reload.
Execution time. If the process can be moved during its execution from
one memory segment to another, then binding must be delayed until run
time.
Why might it be needed to move a process during its execution?
The compile-time and load-time address-binding methods generate
identical logical and physical addresses. However, the execution-time address-binding scheme results in differing logical and physical addresses.
How do the compile-time and load-time methods generate identical logical and physical addresses?
To begin with, I would find a better source for your information. What you have is very poor.
What can be the reason for the starting location to change? Can it be because of context switching/swapping?
You change the code or need the code to be loaded at a different location in memory.
Does absolute code mean binary code?
No. They are independent concepts.
How is relocatable code different from absolute code? Does it contain info about the base, limit and relocation registers?
Relocatable code uses relative addresses, generally relative to the program counter.
(Base, limit and relocation registers would be a system-specific concept.)
How is reloading more efficient than recompiling? They mention that only a reload is needed, meaning no recompilation, just a reload.
Let's say two different programs use the same dynamic library. They may need to have it loaded at different locations in memory. It's not an efficiency issue.
Why might it be needed to move a process during its execution?
This is what was done in ye olde days before virtual memory. To my knowledge no one does this any more.
How do the compile-time and load-time methods generate identical logical and physical addresses?
I don't know what the &^54 they are talking about. That statement makes no sense.
Dynamic libraries (.dll, .so) are relocatable, because they might appear at different addresses in different applications, but in order to save memory the operating system keeps only one copy in physical memory (virtual memory is great), and each application has read-only access.
The same happens for applications that are relocatable. For security, it is also wise that the addresses are random: some remote attacks become slightly harder.
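If you are on a POSIX system you can see this directly. The sketch below (assuming libm.so.6 is available, and linking with -ldl where your toolchain requires it) asks the dynamic loader where it mapped a symbol from a shared library; with ASLR enabled, the printed address typically differs between runs, which is only possible because the library's code is relocatable.

#include <stdio.h>
#include <dlfcn.h>

int main(void) {
    /* Ask the dynamic loader to map the math library into this process. */
    void *handle = dlopen("libm.so.6", RTLD_NOW);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Where did cos() end up in this particular run? */
    void *sym = dlsym(handle, "cos");
    printf("cos() from libm is mapped at %p in this run\n", sym);

    dlclose(handle);
    return 0;
}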

OS - Difference between NX bit and Valid-Invalid Bit

What I understand:
Setting the NX bit high for a page will mean that whenever some program tries to execute the code in that page, it will cause a segmentation fault.
There is also a Valid-Invalid bit, and accessing a page marked Invalid will also cause a segmentation fault.
So the question is:
Why is the NX bit not redundant when you already have a valid-invalid bit? What does it mean to mark a page both Valid and NX?
First, I am going to change the terminology slightly. Instead of using terms like valid/invalid, I am going to replace them with present/not present.
A page that is marked as present can be accessed in some fashion (read, write or execute). A page that is marked as not present is not directly accessible (if at all). If it can be accessed, it must first be loaded from some type of backing store (disk/flash/...) into memory.
The NX bit only controls whether the page of memory has execute permissions. Why is controlling the execute permissions important? It helps in locking down the system to help prevent the execution of arbitrary code.
So, a page that is marked as both NX and not present is one that does not have execute permissions, but may need to be loaded from backing store.
If you do not have an NX bit (especially on your data pages), then a clever hacker who figures out how to jump into the data section while in supervisor/kernel mode has a much easier time executing arbitrary code on your system.
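As a hedged illustration (Linux/POSIX, an x86-64 stub, and a page size assumed to be 4096 bytes): the page below is present/valid the whole time, readable and writable, yet jumping into it only works once execute permission is granted. That separation between "present" and "executable" is exactly what the NX bit provides.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    /* x86-64 machine code for "mov eax, 42; ret" -- a tiny function written as data. */
    unsigned char stub[] = { 0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3 };

    /* The page is present (valid), readable and writable, but not executable:
       it is data, not code, as far as the MMU is concerned. */
    void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) return 1;
    memcpy(page, stub, sizeof stub);

    /* Calling into the page now would be killed by the NX check (SIGSEGV).
       Flip on execute permission first, then the same bytes run fine. */
    mprotect(page, 4096, PROT_READ | PROT_EXEC);

    int (*fn)(void) = (int (*)(void))page;
    printf("stub returned %d\n", fn());
    return 0;
}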
Hope this helps.

Make signal names coming from library links unique?

OK, I've been struggling with this for a while. What is the best way to accomplish the following:
where Reaction Wheel 1-4 are links to the same block in a library. When the Speed Counter, Speed Direction and Current signals are added to the final bus output as shown, MATLAB (rightfully) complains:
Warning: Signals 9, 10, 11, 12 entering Bus Creator
'myAwesomeModel' have duplicated names 'Current'. These are being made unique
by appending "(signal #)" to the signals within the resulting bus. Please
update the labels of the signals such that they are all unique.
Until now I've been using a "solution" like this:
that is, place a size-1 mux / gain-of-1 / other dummy block in the middle, so the signals can be renamed into something unique. However, I'd really like to believe that The MathWorks has thought of a better way to do this...
What is the "proper" way to construct bus signals like this? It feels rather like I'm being pushed to adopt a particular design/architecture, but what that is precisely, eludes me for the moment...
It was quite a challenge for me, but it looks like I kinda sorted it out. MATLAB R2007a here. I'll walk through the example with an already-built subsystem, with its inputs, outputs, and so on.
1- In Block Properties, add a tag to the block. This will be used to identify the block and its "siblings" in the system; MY_SUBSYSTEM for this example.
2- Block Properties again. Add the following snippet in CopyFcn callback:
%Find total amount of copies of the block in system
len = length(find_system(gcs,'Tag','MY_SUBSYSTEM'));
%Get handle of the block copied/added and name the desired signal accordingly
v = get_param(gcb,'PortHandles');
set(v.Outport(_INDEX_OF_PORT_TO_BE_RENAMED_),'SignalNameFromLabel',['BASENAME_HERE' num2str(len)]);
3- In _INDEX_OF_PORT_TO_BE_RENAMED_ you should put the port signal index (starting from 1) that you want to have renamed for each copy of the block. For a single output block this should be 1. BASENAME_HERE should be the port basename, in this case "Current" for you.
4- Add the block to the desired library, and delete the instance you used to create this example. From then on, as you add the block from the library or copy an existing instance, the outport will be named Current1, Current2, Current3, and so on. Notice that you could apply any naming convention or formatting.
Hope this helps. It worked for me, don't hesitate to ask/criticize!
Note: Obviously, as the model grows, this method may become computationally demanding, since find_system has to loop through the entire model; still, it looks like a good workaround to me for small- to medium-sized systems.
Connect a Bus Selector to each Data Output. Select the signals you want and set "Output as bus". Then connect all Bus Selectors to a Bus Creator.
[screenshot: Simulink model]

$readmemh to load subblocks of memory

Sorry if the question is too newbie. I have been looking for information on readmemh and I haven't been able to find a suitable solution to my problem.
I have a large hex file with the contents of a memory that I want to read from. But since the file is just too large, I want to bring only a smaller block of it into my program each time. For example, if my memory has 32-bit addresses, I'd like to load only chunks of 2048 addresses and store the upper part of the address in a variable, to know whether I'm in the right chunk or not.
Or more or less, if my hex file is as follows:
#00000000 1212
#00000001 3434
#00000002 5656
...
#ffffffff 9a9a
I want to be able to save it in a structure with the following fields:
logic [15:0] chunck [0:2047]; // Keeps 2048 entries
logic [20:0] address_top; // Keeps the top part of the address
Looking at the documentation, I see that readmemh allows me to indicate a start and an end address, but from what I can tell these refer to positions in the destination array, not in the source file.
How can I do it?
Thanks in advance!
If you are using SystemVerilog, then you can use an associative array to model your memory, and let it take care of allocating and mapping the physical memory as needed, instead of doing it yourself.

Why syncblk is located at -4 and not at 0?

So if you want to look at the sync block for an object, under SOS you have to look 4 bytes (on 32-bit machines) before the object address. Does anyone know what the wisdom is behind going back 4 bytes? I mean, they could have put the sync block at 0, then the type handle at +4, and then the object fields at +8.
This is an implementation detail, so I can't give you the exact reason for the placement of the syncblock. However, if you look at the shared source CLI, you'll see that the runtime has all sorts of optimizations for how objects are allocated and used, and actually the data associated with a single instance is located in several different places. The syncblock for instance is just an index value for a structure located elsewhere. Similarly the MethodTable and the EEClass are stored elsewhere. These are all implementation details. The important point IMO is understanding how to dig out the information needed during debugging. It is of much less importance to understand why the implementation details are as they are.
I'd say it matches expectations, especially for structs that have been explicitly laid out. As Brian says, it's just an implementation detail though. It's similar to how many implementations of malloc will allocate more space than requested, store the allocation size in the first four (or eight) bytes, and then return a pointer that is offset to point to the next byte beyond that.
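
For what it's worth, here is a hedged sketch of that malloc-style trick in C (my_alloc, my_alloc_size and my_free are made-up names for illustration, not any real allocator's API): the bookkeeping lives in a small header just before the pointer the caller gets back, and you recover it by stepping back from that pointer, much like SOS stepping back 4 bytes from the object address.

#include <stdio.h>
#include <stdlib.h>

typedef struct { size_t size; } header_t;

void *my_alloc(size_t n) {
    /* Reserve room for the header plus the caller's payload. */
    header_t *h = malloc(sizeof *h + n);
    if (!h) return NULL;
    h->size = n;
    return h + 1;            /* hand back a pointer just past the header */
}

size_t my_alloc_size(void *p) {
    /* Step back one header to find the bookkeeping, like looking at -4. */
    return ((header_t *)p - 1)->size;
}

void my_free(void *p) {
    free((header_t *)p - 1);
}

int main(void) {
    void *p = my_alloc(100);
    printf("payload at %p, recorded size = %zu\n", p, my_alloc_size(p));
    my_free(p);
    return 0;
}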