Multiply two 16-bit numbers without using arithmetic instructions - emu8086

I need to multiply two 16-bit numbers without using MUL in the emu8086 emulator.
Below is how I did it:
MOV AX, 000H
MOV DX, AX
MOV AL, [4000H]
MOV CL, [4002H]
ADD AL, CL
MOV [4003H], AL
DAA
MOV AL, [4004H]
MOV CL, [4005H]
ADC AL, CL
MOV [4006H], AL
DAA
MOV AL,[000H]
ADC AL, AL
MOV [4007H],AL
HLT
But I don't think the result is correct.

I'm not fluent in assembly, but an algorithm using only bitwise operations and addition could look something like this:
unsigned int multiply(unsigned int a, unsigned int b) {
    if (a == 0 || b == 0) return 0;
    unsigned int c = 0;
    while (a != 0) {
        if (a & 1) {
            c += b;        // add the shifted multiplicand when the low bit of a is set
        }
        a = a >> 1;
        b = b << 1;
    }
    return c;
}
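For the 16-bit case in the question, the same shift-and-add idea works if the accumulator is widened so the full 32-bit product fits; a minimal C sketch (untested, names are illustrative, not from the question):
#include <stdint.h>

uint32_t mul16(uint16_t a, uint16_t b) {
    uint32_t product = 0;
    uint32_t addend = b;            // widened so the left shifts don't overflow
    while (a != 0) {
        if (a & 1)
            product += addend;      // add the shifted multiplicand when the low bit of a is set
        a >>= 1;
        addend <<= 1;
    }
    return product;
}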

Nibble shuffling with x64 SIMD

I'm aware of byte shuffling instructions, but I'd like to do the same with nibbles (4-bit values); concretely, I'd like to shuffle 16 nibbles in a 64-bit word. My shuffling indices are also stored as 16 nibbles. What's the most efficient implementation of this?
Arbitrary shuffles with a control vector that has to be stored this way? Ugh, hard to work with. I guess you'd have to unpack both to feed SSSE3 pshufb and then re-pack that result.
Probably just punpcklbw against a right-shifted copy, then AND mask to keep only the low 4 bits in each byte. Then pshufb.
Sometimes an odd/even split is easier than widening each element (so bits just stay within their original byte or word). In this case, if we could change your nibble index numbering, punpcklqdq could put the odd or even nibbles in the high half, ready to bring them back down and OR.
But without doing that, re-packing is a separate problem. I guess you'd combine adjacent pairs of bytes into the low byte of each word, perhaps with pmaddubsw if throughput is more important than latency. Then you can packuswb (against zero or itself) or pshufb (with a constant control vector).
If you were doing multiple such shuffles, you could pack two vectors down to one, to store with movhps / movq. Using AVX2, it might be possible to have all the other instructions working on two independent shuffles in the two 128-bit lanes.
// UNTESTED, requires only SSSE3
#include <stdint.h>
#include <immintrin.h>
uint64_t shuffle_nibbles(uint64_t data, uint64_t control)
{
__m128i vd = _mm_cvtsi64_si128(data); // movq
__m128i vd_hi = _mm_srli_epi32(vd, 4); // x86 doesn't have a SIMD byte shift
vd = _mm_unpacklo_epi8(vd, vd_hi); // every nibble at the bottom of a byte, with high garbage
vd = _mm_and_si128(vd, _mm_set1_epi8(0x0f)); // clear high garbage for later merging
__m128i vc = _mm_cvtsi64_si128(control);
__m128i vc_hi = _mm_srli_epi32(vc, 4);
vc = _mm_unpacklo_epi8(vc, vc_hi);
vc = _mm_and_si128(vc, _mm_set1_epi8(0x0f)); // make sure high bit is clear, else pshufb zeros that element.
// AVX-512VBMI vpermb doesn't have that problem, if you have it available
vd = _mm_shuffle_epi8(vd, vc);
// left-hand input is the unsigned one, right hand is treated as signed bytes.
vd = _mm_maddubs_epi16(vd, _mm_set1_epi16(0x1001)); // hi nibbles << 4 (*= 0x10), lo nibbles *= 1.
// vd has nibbles merged into bytes, but interleaved with zero bytes
vd = _mm_packus_epi16(vd, vd); // duplicate vd into low & high halves.
// Pack against _mm_setzero_si128() if you're not just going to movq into memory or a GPR and you want the high half of the vector to be zero.
return _mm_cvtsi128_si64(vd);
}
Masking the data with 0x0f ahead of the shuffle (instead of after) allows more ILP on CPUs with two shuffle units. At least if they already had the uint64_t values in vector registers, or if the data and control values are coming from memory so both can be loaded in the same cycle. If coming from GPRs, 1/clock throughput for vmovq xmm, reg means there's a resource conflict between the dep chains, so they can't both start in the same cycle. But since the data might be ready before the control, masking early keeps it off the critical path for control->output latency.
If latency is a bottleneck instead of the usual throughput, consider replacing pmaddubsw with a right-shift by 4, por, and AND/pack. Or pshufb to pack while ignoring garbage in odd bytes. Since you'd need another constant anyway, you might as well make it a pshufb constant instead of an AND mask; a sketch of that variant follows.
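A minimal, untested sketch of that pshufb re-pack, assuming the post-shuffle vector has one clean nibble per byte (high nibbles zero), as produced by the code above:
#include <immintrin.h>
#include <stdint.h>

// Re-pack 16 bytes holding one nibble each into 8 packed bytes:
// each even byte ORs in its odd neighbour's nibble shifted up by 4,
// then pshufb gathers the even bytes into the low half.
static inline uint64_t repack_nibbles(__m128i vd)
{
    __m128i merged = _mm_or_si128(vd, _mm_srli_epi16(vd, 4));   // even bytes: lo | hi<<4
    const __m128i pick_even = _mm_setr_epi8(0, 2, 4, 6, 8, 10, 12, 14,
                                            -1, -1, -1, -1, -1, -1, -1, -1);
    return _mm_cvtsi128_si64(_mm_shuffle_epi8(merged, pick_even));
}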
If you had AVX-512, a shift and bit-blend with vpternlogd could avoid needing to mask the data before shuffling, and vpermb instead of vpshufb would avoid needing to mask the control, so you'd avoid the set1_epi8(0x0f) constant entirely.
clang's shuffle optimizer didn't spot anything, compiling it as written just like GCC does (https://godbolt.org/z/xz7TTbM1d), even with -march=sapphirerapids: it doesn't notice that it could use vpermb instead of vpand / vpshufb.
shuffle_nibbles(unsigned long, unsigned long):
vmovq xmm0, rdi
vpsrld xmm1, xmm0, 4
vpunpcklbw xmm0, xmm0, xmm1 # xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1],xmm0[2],xmm1[2],xmm0[3],xmm1[3],xmm0[4],xmm1[4],xmm0[5],xmm1[5],xmm0[6],xmm1[6],xmm0[7],xmm1[7]
vmovq xmm1, rsi
vpsrld xmm2, xmm1, 4
vpunpcklbw xmm1, xmm1, xmm2 # xmm1 = xmm1[0],xmm2[0],xmm1[1],xmm2[1],xmm1[2],xmm2[2],xmm1[3],xmm2[3],xmm1[4],xmm2[4],xmm1[5],xmm2[5],xmm1[6],xmm2[6],xmm1[7],xmm2[7]
vmovdqa xmm2, xmmword ptr [rip + .LCPI0_0] # xmm2 = [15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15]
vpand xmm0, xmm0, xmm2
vpand xmm1, xmm1, xmm2
vpshufb xmm0, xmm0, xmm1
vpmaddubsw xmm0, xmm0, xmmword ptr [rip + .LCPI0_1]
vpackuswb xmm0, xmm0, xmm0
vmovq rax, xmm0
ret
(Without AVX, it requires 2 extra movdqa register-copy instructions.)
I came across this problem today. In AVX-512 you can use vpmultishiftqb, an amusing instruction available in Ice Lake and later (and apparently in Zen 4, according to Wikipedia), to shuffle nibbles much more quickly. Its power lies in its ability to select bytes in an unaligned fashion: for each 64-bit element, it extracts eight 8-bit chunks from the corresponding source element, each starting at an arbitrary bit offset chosen by a control byte. Below is an implementation.
#include <immintrin.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
// Convention: (a & (0xf << (4 * i))) >> (4 * i) is the ith nibble of a
// (i.e., the least significant nibble is nibble 0)
uint64_t shuffle_nibbles(uint64_t data, uint64_t indices) {
#if defined(__AVX512VBMI__) && defined(__AVX512VL__)
// If your data is already in vectors, then this method also works in parallel
const __m128i lo_nibble_msk = _mm_set1_epi8(0x0f);
__m128i v_data = _mm_cvtsi64_si128(data);
__m128i v_indices = _mm_cvtsi64_si128(indices);
__m128i indices_lo = _mm_and_si128(lo_nibble_msk, v_indices);
__m128i indices_hi = _mm_andnot_si128(lo_nibble_msk, v_indices);
indices_lo = _mm_slli_epi32(indices_lo, 2);
indices_hi = _mm_srli_epi32(indices_hi, 2);
// Look up unaligned bytes
__m128i shuffled_hi = _mm_multishift_epi64_epi8(indices_hi, v_data);
__m128i shuffled_lo = _mm_multishift_epi64_epi8(indices_lo, v_data);
shuffled_hi = _mm_slli_epi32(shuffled_hi, 4);
// msk ? lo : hi
__m128i shuffled = _mm_ternarylogic_epi32(lo_nibble_msk, shuffled_lo, shuffled_hi, 202);
return _mm_cvtsi128_si64(shuffled);
#else
// Fallback scalar implementation (prefer Peter Cordes's SSE solution; this is just an example)
uint64_t result = 0;
for (int i = 0; i < 16; ++i) {
indices = (indices >> 60) + (indices << 4);
int idx = indices & 0xf;
result <<= 4;
result |= (data >> (4 * idx)) & 0xf;
}
return result;
#endif
}
int main() {
// 0xaa025411fe034102
uint64_t r1 = shuffle_nibbles(0xfedcba9876543210, 0xaa025411fe034102);
// 0x55fdabee01fcbefd
uint64_t r2 = shuffle_nibbles(0x0123456789abcdef, 0xaa025411fe034102);
// 0xaaaa00002222aaaa
uint64_t r3 = shuffle_nibbles(0xaa025411fe034102, 0xeeee11110000ffff);
printf("0x%" PRIx64 "\n", r1);
printf("0x%" PRIx64 "\n", r2);
printf("0x%" PRIx64 "\n", r3);
}
Clang yields:
.LCPI0_0:
.zero 16,60
shuffle_nibbles(unsigned long, unsigned long):
vmovq xmm0, rdi
vmovq xmm1, rsi
vpslld xmm2, xmm1, 2
vpsrld xmm1, xmm1, 2
vmovdqa xmm3, xmmword ptr [rip + .LCPI0_0] # xmm3 = [60,60,60,60,60,60,60,60,60,60,60,60,60,60,60,60]
vpand xmm1, xmm1, xmm3
vpmultishiftqb xmm1, xmm1, xmm0
vpand xmm2, xmm2, xmm3
vpmultishiftqb xmm0, xmm2, xmm0
vpslld xmm1, xmm1, 4
vpternlogd xmm1, xmm0, dword ptr [rip + .LCPI0_1]{1to4}, 216
vmovq rax, xmm1
In my case, I am shuffling nibbles in vectors with 64-bit elements; this method also avoids the need for widening. If your shuffle control is constant and you stay in vectors, this method reduces to a measly four instructions: 2x vpmultishiftqb, 1x vpslld, and 1x vpternlogd. Counting µops suggests a latency of 5 cycles and a throughput of one shuffle every 2 cycles, bottlenecked on shuffle µops, for 128- and 256-bit vectors; and one every 3 cycles for 512-bit vectors, due to reduced execution units for the latter two instructions.

How to emulate *really simple* variable bit shifts with SSE?

I have two variable bit-shifting code fragments that I want to SSE-vectorize by some means:
1) a = 1 << b (where b = 0..7 exactly), i.e. 0/1/2/3/4/5/6/7 -> 1/2/4/8/16/32/64/128
2) a = 1 << (8 * b) (where b = 0..7 exactly), i.e. 0/1/2/3/4/5/6/7 -> 1/0x100/0x10000/etc
OK, I know that AMD's XOP VPSHLQ would do this, as would AVX2's VPSLLVQ. But my challenge here is whether this can be achieved on 'normal' (i.e. up to SSE4.2) SSE.
So, is there some funky SSE-family opcode sequence that will achieve the effect of either of these code fragments? These only need yield the listed output values for the specific input values (0-7).
Update: here's my attempt at 1), based on Peter Cordes' suggestion of using the floating point exponent to do simple variable bitshifting:
#include <stdint.h>

typedef union
{
    int32_t i;
    float f;
} uSpec;

void do_pow2(uint64_t *in_array, uint64_t *out_array, int num_loops)
{
    uSpec u;
    for (int i = 0; i < num_loops; i++)
    {
        int32_t x = *(int32_t *)&in_array[i];
        u.i = (127 + x) << 23;        // build the bit pattern of the float 2^x
        int32_t r = (int32_t) u.f;    // converting back to integer yields 1 << x
        out_array[i] = r;
    }
}
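For reference, a minimal SSE2 sketch of the same exponent trick done four elements at a time with intrinsics (untested; the function name is just illustrative):
#include <immintrin.h>

// 2^b for each 32-bit element, assuming b is in 0..7 (more generally 0..30):
// build the IEEE-754 bit pattern of the float 2^b, then truncate back to int.
static inline __m128i pow2_epi32(__m128i b)
{
    __m128i exponent   = _mm_add_epi32(b, _mm_set1_epi32(127)); // biased exponent
    __m128i float_bits = _mm_slli_epi32(exponent, 23);          // bits of the float 2^b
    return _mm_cvttps_epi32(_mm_castsi128_ps(float_bits));      // 1 << b as integers
}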

CRC-32 algorithm from HDL to software

I implemented a Galois Linear-Feedback Shift-Register in Verilog (and also in MATLAB, mainly to emulate the HDL design). It's been working great, and as of now I use MATLAB to calculate CRC-32 fields and then include them in my HDL simulations to verify that a data packet has arrived correctly (padding the data with the CRC-32), which produces good results.
The thing is, I want to be able to calculate in software the CRC-32 I've implemented, because I'll be using a Raspberry Pi to input data through GPIO into my FPGA, and I haven't been able to do so. I've tried this online calculator, using the same parameters, but I never get the same result.
This is the MATLAB code I use to calculate my CRC-32:
N = 74*16;
data = [round(rand(1,N)) zeros(1,32)];
lfsr = ones(1,32);
next_lfsr = zeros(1,32);
for i = 1:length(data)
next_lfsr(1) = lfsr(2);
next_lfsr(2) = lfsr(3);
next_lfsr(3) = lfsr(4);
next_lfsr(4) = lfsr(5);
next_lfsr(5) = lfsr(6);
next_lfsr(6) = xor(lfsr(7),lfsr(1));
next_lfsr(7) = lfsr(8);
next_lfsr(8) = lfsr(9);
next_lfsr(9) = xor(lfsr(10),lfsr(1));
next_lfsr(10) = xor(lfsr(11),lfsr(1));
next_lfsr(11) = lfsr(12);
next_lfsr(12) = lfsr(13);
next_lfsr(13) = lfsr(14);
next_lfsr(14) = lfsr(15);
next_lfsr(15) = lfsr(16);
next_lfsr(16) = xor(lfsr(17), lfsr(1));
next_lfsr(17) = lfsr(18);
next_lfsr(18) = lfsr(19);
next_lfsr(19) = lfsr(20);
next_lfsr(20) = xor(lfsr(21),lfsr(1));
next_lfsr(21) = xor(lfsr(22),lfsr(1));
next_lfsr(22) = xor(lfsr(23),lfsr(1));
next_lfsr(23) = lfsr(24);
next_lfsr(24) = xor(lfsr(25), lfsr(1));
next_lfsr(25) = xor(lfsr(26), lfsr(1));
next_lfsr(26) = lfsr(27);
next_lfsr(27) = xor(lfsr(28), lfsr(1));
next_lfsr(28) = xor(lfsr(29), lfsr(1));
next_lfsr(29) = lfsr(30);
next_lfsr(30) = xor(lfsr(31), lfsr(1));
next_lfsr(31) = xor(lfsr(32), lfsr(1));
next_lfsr(32) = xor(data(i), lfsr(1));
lfsr = next_lfsr;
end
crc32 = lfsr;
Note that I pad with 32 zeroes to calculate the CRC-32 in the first place (whatever's left in the LFSR at the end is my CRC-32; if I repeat the process with the zeroes replaced by this CRC-32, the LFSR ends up all zeros, which means the verification passed).
The polynomial I'm using is the standard one for CRC-32: 04C11DB7. Note also that the tap order appears reversed, but that's just because it's mirrored so the input enters at the MSB. This representation and the mirrored one give the same result for the same input, except that the result is mirrored too.
Any ideas would be of great help.
Thanks in advance
Your CRC is not a CRC. The last 32 bits fed in don't actually participate in the calculation, other than being exclusive-or'ed into the result. That is, if you replace the last 32 bits of data with zeros, do your calculation, and then exclusive-or the last 32 bits of data with the resulting "crc32", then you will get the same result.
So you will never get it to match another CRC calculation, since it isn't a CRC.
This code in C replicates your function, where the data bits come from the series of n bytes at p, least significant bit first, and the result is a 32-bit value:
unsigned long notacrc(void const *p, unsigned n) {
unsigned char const *dat = p;
unsigned long reg = 0xffffffff;
while (n) {
for (unsigned k = 0; k < 8; k++)
reg = reg & 1 ? (reg >> 1) ^ 0xedb88320 : reg >> 1;
reg ^= (unsigned long)*dat++ << 24;
n--;
}
return reg;
}
You can immediately see that the last byte of data is simply exclusive-or'ed with the final register value. Less obvious is that the last four bytes are just exclusive-or'ed. This exactly equivalent version makes that evident:
unsigned long notacrc_xor(void const *p, unsigned n) {
unsigned char const *dat = p;
// initial register values
unsigned long const init[] = {
0xffffffff, 0x2dfd1072, 0xbe26ed00, 0x00be26ed, 0xdebb20e3};
unsigned xor = n > 3 ? 4 : n; // number of bytes merely xor'ed
unsigned long reg = init[xor];
while (n > xor) {
reg ^= *dat++;
for (unsigned k = 0; k < 8; k++)
reg = reg & 1 ? (reg >> 1) ^ 0xedb88320 : reg >> 1;
n--;
}
switch (n) {
case 4:
reg ^= *dat++;
case 3:
reg ^= (unsigned long)*dat++ << 8;
case 2:
reg ^= (unsigned long)*dat++ << 16;
case 1:
reg ^= (unsigned long)*dat++ << 24;
}
return reg;
}
There you can see that the last four bytes of the message, or all of the message if it is three or fewer bytes, is exclusive-or'ed with the final register value at the end.
An actual CRC must use all of the input data bits in determining when to exclusive-or the polynomial with the register. The inner part of that last function is what a CRC implementation looks like (though more efficient versions make use of pre-computed tables to process a byte or more at a time). Here is a function that computes an actual CRC:
unsigned long crc32_jam(void const *p, unsigned n) {
unsigned char const *dat = p;
unsigned long reg = 0xffffffff;
while (n) {
reg ^= *dat++;
for (unsigned k = 0; k < 8; k++)
reg = reg & 1 ? (reg >> 1) ^ 0xedb88320 : reg >> 1;
n--;
}
return reg;
}
That one is called crc32_jam because it implements a particular CRC called "JAMCRC". That CRC is the closest to what you attempted to implement.
If you want to use a real CRC, you will need to update your Verilog implementation.
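For comparison, the widespread standard CRC-32 (the value zlib and most online calculators report) is the same computation with the final register value complemented; a sketch:
unsigned long crc32_std(void const *p, unsigned n) {
    unsigned char const *dat = p;
    unsigned long reg = 0xffffffff;
    while (n) {
        reg ^= *dat++;
        for (unsigned k = 0; k < 8; k++)
            reg = reg & 1 ? (reg >> 1) ^ 0xedb88320 : reg >> 1;
        n--;
    }
    return reg ^ 0xffffffff;   // the final XOR is the only difference from JAMCRC
}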

Sending float from PIC to Raspberry Pi

I'm trying to send a float value from a PIC16F877A (EasyPIC4 development board) to a Raspberry Pi via UART.
mikroC code:
AValue = 4.88 * ADC_Read(2);
ptr = (unsigned char *)&AValue;
for (i = 0; i < sizeof(AValue); i++)
    UART1_Write(*(ptr + i));
UART1_Write(0x0a);
delay_ms(100);
Python code on the Raspberry Pi:
import serial, time, struct
from pprint import pprint

ser = serial.Serial("/dev/ttyAMA0", 9600)
ser.write(raw_input("enter char: "))
while True:
    count = 0
    AValue = []
    for ch in ser.read():
        if ch == "\n":
            AValue = []
            time.sleep(0.1)
            while count < 4:
                for ch in ser.read():
                    AValue.append(ch)
                    count += 1
            flt = struct.unpack("<f", str("".join(AValue)))
            pprint(flt)
Output in the Python shell on the Raspberry Pi: the value changes as I move the pot around, as you can see, but only the zero value is correct. Actually not even zero, since the first value should be 1 * 4.88.
(0.0,)
(-1.1472824864025526e-35,)
(-3.2123910193243324e-35,)
(-4.405564851100735e-35,)
(-3.0045950051756514e-32,)
Be careful: Microchip uses a custom float format.
Here you can see both representations of the number 4997.12:
0F6h, 028h, 09Ch, 045h ; IEEE Real = 4997.12
0F6h, 028h, 01Ch, 08Bh ; Microchip Real = 4997.12
For more information about read: http://ww1.microchip.com/downloads/en/AppNotes/00575.pdf
EDIT: added x86 asm conversion routines...
procedure AnsiSingleToMFormat(var Data: single);
asm
mov cx,[eax + 2]
rcl cl,1
rcl ch,1
rcr cl,1
mov [eax + 2],cx
end;
procedure MFormatToAnsiSingle(var Data: single);
asm
mov cx,[eax + 2]
rcl cl,1
rcr ch,1
rcr cl,1
mov [eax + 2],cx
end;
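For the Raspberry Pi side, a hedged C sketch of the same rearrangement (untested; it assumes the AN575 layout of exponent byte, then sign bit, then 23-bit mantissa, which matches the example bytes above):
#include <stdint.h>
#include <string.h>

// Convert a 32-bit Microchip-format float [exponent(8) | sign(1) | mantissa(23)]
// to an IEEE-754 single [sign(1) | exponent(8) | mantissa(23)]: only the top 9 bits move.
float mchip_to_ieee(uint32_t m)
{
    uint32_t sign      = (m >> 23) & 1u;       // sign sits just below the exponent byte
    uint32_t exponent  = m >> 24;              // exponent occupies the top byte
    uint32_t ieee_bits = (sign << 31) | (exponent << 23) | (m & 0x7FFFFFu);
    float f;
    memcpy(&f, &ieee_bits, sizeof f);          // reinterpret the bits as a float
    return f;
}
With the example above, the Microchip pattern 0x8B1C28F6 maps to the IEEE pattern 0x459C28F6, i.e. 4997.12.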

How to perform right shift on RISC

I'd like to know how I can perform a right shift on a Reduced Instruction Set Computer that does not offer this operation on its own.
A left shift can simply be done by adding a register to itself, but how about a right shift?
The RISC offers only:
ADD
NOT
NXOR (XOR)
AND (NAND)
so OR and NOR can be emulated by several (N)AND and NOT operations.
The C function below uses only the allowed operations plus conditional jumps; it shifts input right by one bit into output.
If the instruction you are trying to emulate is “shift by n”, then you should start with c equal to 2^n instead of 2; a generalized sketch follows the code.
unsigned int shift_right(unsigned int input) {
    unsigned int d = 1;
    unsigned int output = 0;
    // c scans bits 1..31 of the input; it wraps to 0 after the top bit, which ends the loop
    for (unsigned int c = 2; c != 0; c += c)
    {
        if (c & input)
            output |= d;
        d += d;
    }
    return output;
}
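A hedged sketch of the “shift by n” generalization mentioned above (illustrative only; the starting value of c is built with repeated addition, so no shift instruction is needed):
unsigned int shift_right_n(unsigned int input, unsigned int n) {
    unsigned int c = 1;
    for (unsigned int i = 0; i < n; i++)
        c += c;                     // c = 2^n, using addition only
    unsigned int d = 1;
    unsigned int output = 0;
    for (; c != 0; c += c)          // scan bits n..31 of the input
    {
        if (c & input)
            output |= d;
        d += d;
    }
    return output;
}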