The "Special Mask Mode" in 8259A with OCW3 (PIC : Programmable Interrupt Controller) - operating-system

The 8259A manual says:
In the special Mask Mode, when a mask bit is set in OCW1, it inhibits further interrupts at that level and enables interrupt from all other levels (lower as well as higher level) that are not masked.
Thus, any interrupts may be selectively enabled by loading the mask register.
Does the operating system use this mode when the kernel sets up the PIC? If not, then lower-priority interrupts get somewhat less benefit; for example, the keyboard is favored over the FDD. Is the hardware interrupt mapping to the cascaded PICs in a PC assigned/hard-wired in order of importance?
Special Mask Mode or normal mask mode: which would give better performance or some other benefit?
Suggestions on implementing Special Mask Mode when configuring the PIC for an x86 PC system would help me decide whether I should use it.
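For reference, a minimal sketch of flipping Special Mask Mode on and off via OCW3, assuming a freestanding x86 kernel with direct port I/O (the port addresses are the standard PC values; the constant and helper names are mine):

    #include <stdint.h>

    #define PIC1_CMD 0x20   /* master 8259A command port on a PC */
    #define PIC2_CMD 0xA0   /* slave 8259A command port */

    /* OCW3 layout: bit 3 set marks the byte as OCW3; ESMM (bit 6) must be
     * set for the SMM bit (bit 5) to take effect. */
    #define OCW3_SET_SMM   0x68   /* 01101000b: ESMM=1, SMM=1 -> special mask mode */
    #define OCW3_CLEAR_SMM 0x48   /* 01001000b: ESMM=1, SMM=0 -> normal mask mode */

    static inline void outb(uint16_t port, uint8_t val)
    {
        __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
    }

    void pic_set_special_mask_mode(int enable)
    {
        uint8_t ocw3 = enable ? OCW3_SET_SMM : OCW3_CLEAR_SMM;
        outb(PIC1_CMD, ocw3);
        outb(PIC2_CMD, ocw3);
    }

As far as I know, mainstream PC kernels leave the 8259A in the normal fully nested mode and control delivery purely through the OCW1 mask register, so SMM is mainly worth considering if you re-enable interrupts inside a long-running handler and want lines other than the current one, lower-priority ones included, to get through.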

Transporter fleet not blocking each other (AnyLogic)

In my model I have a path-guided transporter fleet, but when the transporters get close to each other they block each other. Since this is outside the scope of my model (I just want them to pass through each other), is there a way to disable this behavior? I've already tried setting the minimum distance to obstacle very low and using very small dimensions (see figure), but nothing seems to work.
The key aspect of Material-Handling transporters is that they apply that spatial blocking.
If you do not want it, use moving resources from the Process Modeling library. They act the same but have no spatial awareness. However, they also cannot do free-space navigation; path-guided movement works, but without applying path-specific constraints.
So it is a trade-off. The process-library resources also require much less computational power.

Why is 'GPIO.setup(6, GPIO.IN)' throwing an error?

I'm trying to read the state of the input pin (BOARD pin 6, which is a ground pin) and I receive the error "ValueError: The channel sent is invalid on a Raspberry Pi".
Am I misunderstanding the definition of an input pin? My understanding was that it is simply the ground/negative pin, connecting back 'in' to the Pi?
I'm trying to read the state purely for tinkering purposes, to see the value change when it's floating (not using a pull-down).
The ground pin is connected, literally, to ground. It is impossible to read or write values on the ground or power pins; they are fixed parts of the circuit. You have to connect to a GPIO pin (the green(ish? I'm colorblind) dots at http://pinout.xyz).
A GPIO input can read HIGH or LOW, depending on the circuit you wire to it. If you expect the GPIO to be normally LOW and to go HIGH when your input is triggered (such as with a pushbutton switch), then you should enable the internal pull-down.
I would recommend you read some of the background on microcontrollers: https://embeddedartistry.com/blog/2018/06/04/demystifying-microcontroller-gpio-settings/
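For example, a minimal sketch that reads a real GPIO pin instead (BOARD pin 7, i.e. GPIO4, is an arbitrary choice; the pull-down assumes something like a button wired from the pin to 3V3):

    import time
    import RPi.GPIO as GPIO

    GPIO.setmode(GPIO.BOARD)   # physical header pin numbers

    # Pin 7 (GPIO4) is an actual GPIO, unlike pin 6, which is a ground pin.
    # The internal pull-down keeps the input at a defined LOW until driven HIGH.
    GPIO.setup(7, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)

    try:
        while True:
            print("HIGH" if GPIO.input(7) else "LOW")
            time.sleep(0.5)
    finally:
        GPIO.cleanup()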

STM32 SPI: write on rising edge and read on falling edge, possible?

I have an application IC which needs the data line written on the rising edge and read on the falling edge. Right now I get both on the rising edge.
I am using bidirectional mode, so only 3 wires.
Thanks
needs to write on rising edge and read on falling edge
Look at the SPI timing diagram in the Reference Manual. (That one is for the F4 series, but AFAIK other series have compatible SPI controllers.)
It does what you want when CPHA == 1 and CPOL == 0. Data lines are written at the rising edge, and captured at the falling edge of SCK.
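If you are using the Cube HAL, a minimal init sketch for that mode could look like the following (assuming SPI1 in master mode on an F4 part; the prescaler and data size are placeholders, and the polarity/phase lines are the point here):

    #include "stm32f4xx_hal.h"   /* assuming an F4 part, per the answer above */

    SPI_HandleTypeDef hspi1;

    /* CPOL = 0 (clock idles low) and CPHA = 1 (sample on the second, i.e.
     * falling, edge): the controller shifts data out on the rising edge of
     * SCK and captures it on the falling edge. */
    void spi1_init(void)
    {
        hspi1.Instance               = SPI1;
        hspi1.Init.Mode              = SPI_MODE_MASTER;
        hspi1.Init.Direction         = SPI_DIRECTION_1LINE;  /* 3-wire bidirectional */
        hspi1.Init.DataSize          = SPI_DATASIZE_8BIT;
        hspi1.Init.CLKPolarity       = SPI_POLARITY_LOW;     /* CPOL = 0 */
        hspi1.Init.CLKPhase          = SPI_PHASE_2EDGE;      /* CPHA = 1 */
        hspi1.Init.NSS               = SPI_NSS_SOFT;
        hspi1.Init.BaudRatePrescaler = SPI_BAUDRATEPRESCALER_16;
        hspi1.Init.FirstBit          = SPI_FIRSTBIT_MSB;
        hspi1.Init.TIMode            = SPI_TIMODE_DISABLE;
        hspi1.Init.CRCCalculation    = SPI_CRCCALCULATION_DISABLE;

        if (HAL_SPI_Init(&hspi1) != HAL_OK) {
            /* handle the error */
        }
    }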

State-preserving particle system for OpenGL ES 2.0

I'm trying to implement a state-preserving particle system on the iPhone using OpenGL ES 2.0. By state-preserving, I mean that each particle is integrated forward in time, having a unique velocity and position vector that changes with time and cannot be calculated from the initial conditions at every rendering call.
Here's one possible way I can think of.
1. Set up particle initial conditions in a VBO.
2. Integrate the particles in the vertex shader and write the result to a texture in the fragment shader. (1st rendering call)
3. Copy the data from the texture to the VBO.
4. Render the particles from the data in the VBO. (2nd rendering call)
5. Repeat steps 2-4.
The only thing I don't know how to do efficiently is step 3. Do I have to go through the CPU? I wonder if it is possible to do this entirely on the GPU with OpenGL ES 2.0. Any hints are greatly appreciated!
I don't think this is possible without simply using glReadPixels -- ES2 doesn't have the flexible buffer management that desktop OpenGL has to let you copy buffer contents on the GPU (where, for example, you could copy data between the texture and the VBO, or simply use transform feedback, which is basically designed to do exactly what you want).
I think your only option if you need to use the GPU is to use glReadPixels to copy the framebuffer contents back out after rendering. You probably also want to check and use EXT_color_buffer_float or related if available to make sure you have high precision values (RGBA8 is probably not going to be sufficient for your particles). If you're intermixing this with normal rendering, you probably want to build in a bunch of buffering (wait a frame or two) so you don't stall the CPU waiting for the GPU (this would be especially bad on PowerVR since it buffers a whole frame before rendering).
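As a sketch of what step 3 then boils down to (the identifiers are illustrative, not from any particular codebase):

    #include <OpenGLES/ES2/gl.h>

    /* After the simulation pass has rendered the updated particle state into
     * the FBO's color texture, pull it back through the CPU and upload it
     * into the VBO used by the draw pass. On ES2 there is no GPU-side copy
     * path, so this round trip is the unavoidable cost. */
    void copy_state_texture_to_vbo(GLuint simFBO, GLuint particleVBO,
                                   GLsizei texWidth, GLsizei texHeight,
                                   GLubyte *staging /* texWidth*texHeight*4 bytes */)
    {
        glBindFramebuffer(GL_FRAMEBUFFER, simFBO);
        /* Blocks until the GPU has finished the simulation pass. */
        glReadPixels(0, 0, texWidth, texHeight, GL_RGBA, GL_UNSIGNED_BYTE, staging);

        glBindBuffer(GL_ARRAY_BUFFER, particleVBO);
        glBufferSubData(GL_ARRAY_BUFFER, 0,
                        (GLsizeiptr)texWidth * texHeight * 4, staging);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
    }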
ES 3.0 will have support for transform feedback, which doesn't help you now but hopefully gives you some hope for the future.
Also, if you are running on an ARM CPU, it may well be faster to use NEON to update all your particles on the CPU. It can be quite fast and skips all the overhead you'll incur from the CPU+GPU method.
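For illustration, a minimal NEON sketch of the integration step, assuming each particle's position and velocity are stored as four packed floats:

    #include <arm_neon.h>

    /* p += v * dt for all particles, four floats (x, y, z, w) at a time. */
    void integrate_particles(float *pos, const float *vel,
                             float dt, int particleCount)
    {
        float32x4_t vdt = vdupq_n_f32(dt);
        for (int i = 0; i < particleCount; ++i) {
            float32x4_t p = vld1q_f32(pos + 4 * i);
            float32x4_t v = vld1q_f32(vel + 4 * i);
            p = vmlaq_f32(p, v, vdt);    /* p + v * dt */
            vst1q_f32(pos + 4 * i, p);
        }
    }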

What does the Tiler Utilization statistic mean in the iPhone OpenGL ES instrument?

I have been trying to perform some OpenGL ES performance optimizations in an attempt to boost the number of triangles per second that I'm able to render in my iPhone application, but I've hit a brick wall. I've tried converting my OpenGL ES data types from fixed to floating point (per Apple's recommendation), interleaving my vertex buffer objects, and minimizing changes in drawing state, but none of these changes have made a difference in rendering speed. No matter what, I can't seem to push my application above 320,000 triangles/s on an iPhone 3G running the 3.0 OS. According to this benchmark, I should be able to hit 687,000 triangles/s on this hardware with the smooth shading I'm using.
In my testing, when I run the OpenGL ES performance tool in Instruments against the running device, I'm seeing the statistic "Tiler Utilization" reaching nearly 100% when rendering my benchmark, yet the "Renderer Utilization" is only getting to about 30%. This may be providing a clue as to what the bottleneck is in the display process, but I don't know what these values mean, and I've not found any documentation on them. Does someone have a good description of what this and the other statistics in the iPhone OpenGL ES instrument stand for? I know that the PowerVR MBX Lite in the iPhone 3G is a tile-based deferred renderer, but I'm not sure what the difference would be between the Renderer and Tiler in that architecture.
If it helps in any way, the (BSD-licensed) source code to this application is available if you want to download and test it yourself. In the current configuration, it starts a little benchmark every time you load a new molecular structure and outputs the triangles / s to the console.
The Tiler Utilization and Renderer Utilization percentages measure the duty cycle of the vertex and fragment processing hardware, respectively. On the MBX, Tiler Utilization typically scales with the amount of vertex data being sent to the GPU (in terms of both the number of vertices and the size of the attributes sent per vertex), and Renderer Utilization generally increases with overdraw and texture sampling.
In your case, the best thing would be to reduce the size of each vertex you’re sending. For starters, I’d try binning your atoms and bonds by color, and sending each of these bins using a constant color instead of an array. I’d also suggest investigating if shorts are suitable for your positions and normals, given appropriate scaling. You might also have to bin by position in this case, if shorts scaled to provide sufficient precision aren’t covering the range you need. These sorts of techniques might require additional draw calls, but I suspect the improvement in vertex throughput will outweigh the extra per-draw call CPU overhead.
Note that it’s generally beneficial (on MBX and elsewhere) to ensure that each vertex attribute begins on a 32-bit boundary, which implies that you should pad your positions and normals out to 4 components if you switch them to shorts. The peculiarities of the MBX platform also make it such that you want to actually include the W component of the position in the call to glVertexPointer in this case.
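As a concrete illustration of that layout (a sketch, assuming shorts give you enough range and precision):

    #include <stddef.h>           /* offsetof */
    #include <OpenGLES/ES1/gl.h>

    /* Position and normal padded out to 4 shorts each, so every attribute
     * starts on a 32-bit boundary: 16 bytes per vertex vs. 24 for 3+3 floats. */
    typedef struct {
        GLshort position[4];      /* x, y, z, w */
        GLshort normal[4];        /* nx, ny, nz, pad */
    } PackedVertex;

    /* With the interleaved VBO bound; note size 4 for the position, per above. */
    static void set_vertex_pointers(void)
    {
        glVertexPointer(4, GL_SHORT, sizeof(PackedVertex),
                        (const GLvoid *)offsetof(PackedVertex, position));
        glNormalPointer(GL_SHORT, sizeof(PackedVertex),
                        (const GLvoid *)offsetof(PackedVertex, normal));
    }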
You might also consider pursuing alternate lighting methods like DOT3 for your polygon data, particularly the spheres, but this requires special care to make sure that you aren’t making your rendering fragment-bound, or inadvertently sending more vertex data than before.
Great answer, @Pivot! For reference, this Apple doc defines these terms:
Renderer Utilization %. The percentage of time the GPU spent performing fragment processing.
Tiler Utilization %. The percentage of time the GPU spent performing vertex processing and tiling.
Device Utilization %. The percentage of time the GPU spent doing any tiling or rendering work.