How to convert 8085 code to Z80 assembly - microprocessors

I have 8085 assembly code that divides two 8-bit values by repeated subtraction:
        MVI C,FFH       ; quotient counter, pre-decremented to -1
        LXI H,1900H     ; HL -> operands in memory
        MOV A,M         ; A = dividend (A=08)
        INX H
        MOV B,M         ; B = divisor (B=3)
REPEAT: INR C           ; count one subtraction
        SUB B
        JNC REPEAT      ; keep subtracting until A goes negative
        ADD B           ; undo the last subtraction: A = remainder
        INX H
        MOV M,C         ; store quotient
        INX H
        MOV M,A         ; store remainder
        HLT

If you don't use the special opcodes RIM and SIM, which only the 8085 has, the resulting machine code will run on the Z80 without changes in almost all cases. That is the case with your program.
However, if your task is to translate the mnemonics, it is essentially a search-and-replace session. Start with the first one, MVI, and change it to LD, and so on.
You will also need to change operands like M to (HL), because that is the syntax the Z80 assembler expects.
Either way, you need the documentation for both instruction sets to do this.
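For illustration, here is your program translated one-for-one into Z80 mnemonics (a sketch; check your own assembler's conventions, e.g. whether hex constants need a leading zero, and note that JR NC can replace JP NC to save a byte):

        LD   C,0FFH      ; MVI C,FFH
        LD   HL,1900H    ; LXI H,1900H
        LD   A,(HL)      ; MOV A,M
        INC  HL          ; INX H
        LD   B,(HL)      ; MOV B,M
REPEAT: INC  C           ; INR C
        SUB  B           ; SUB B
        JP   NC,REPEAT   ; JNC REPEAT
        ADD  A,B         ; ADD B
        INC  HL          ; INX H
        LD   (HL),C      ; MOV M,C
        INC  HL          ; INX H
        LD   (HL),A      ; MOV M,A
        HALT             ; HLT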

Related

Find content of C register after execution in microprocessor

Find the content of the 'C' register after execution of the following assembly program:
MVI A, 17H
LOOP: RLC
JNC LOOP
MOV C,A
HLT
Here my doubt is: since at the end we do MOV C,A, the output should be the value of A that ends up in C, rather than the previous result we got after the RLC instructions. But this didn't happen. Why so?
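A quick trace may help (on the 8085, RLC rotates the accumulator left, copying bit 7 into both the carry flag and bit 0):

A = 17H = 00010111b
RLC: A = 2EH, CY = 0 -> JNC loops
RLC: A = 5CH, CY = 0 -> JNC loops
RLC: A = B8H, CY = 0 -> JNC loops
RLC: A = 71H, CY = 1 -> falls through

MOV C,A then copies the current accumulator into C, so C = 71H. Note that "the value of A" and "the result after RLC" are the same thing at that point: the rotations happen in A itself.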

Decrease the computation time for DGELSD subroutine

I am writing code in Fortran that involves computing a linear least-squares solution (A^(-1)*b). For this I have used the subroutine DGELSD, and I am able to get the correct answer from my code: I have cross-checked my solution against MATLAB.
When it comes to computation time, however, MATLAB takes much less time than my .f90 code. The main motivation for writing the .f90 code was to do heavy computations, as MATLAB was unable to handle this problem (as the matrix size is increased, it takes more and more time); I am talking about matrices on the order of 10^6 x 10^6.
I know that I may be lacking somewhere around vectorization or parallelization of the code (I am new to it), but would that make any difference, given that the subroutine DGELSD is already highly optimized? I am using the Intel ifort compiler with Visual Studio as the editor.
I have attached the relevant part of the main code (.f90) below for reference. Please suggest what can be done to decrease the computation time. I have a good hardware configuration for running heavy computations.
Workstation specification: Intel Xeon 32-core 64-bit processor, 64 GB RAM.
Code information:
a) I have used the DGELSD example available on the Intel MKL examples site.
b) I am using the x64 architecture.
c) This is the code I am using in MATLAB for comparison:
function min_norm_check(m,n)
    tic
    a = 15*m*n;
    b = 15*m*n;
    c = 6;
    A = rand(a,b);
    B = rand(b,c);
    C = A\B;
    toc
end
The Fortran code is given below:
! Program to check the time required to
! calculate linear least square solution
! using DGELSD subroutine
PROGRAM MAIN
    IMPLICIT NONE
    INTEGER :: M,N,LDA,LDB,NRHS,NB,NG
    REAL :: T1,T2,START,FINISH
    DOUBLE PRECISION, DIMENSION (:,:), ALLOCATABLE :: D_CAP,A,B,TG,DG
    INTEGER :: I=0
    NB = 10
    NG = 10
    M = 15*NB*NG
    N = 15*NB*NG
    NRHS = 6
    !!
    LDA = MAX(M,N)
    LDB = MAX(M,NRHS)
    !
    ALLOCATE (A(LDA,N))
    ALLOCATE (B(LDB,NRHS))
    A = 0
    B = 0
    CALL RANDOM_NUMBER(A)
    CALL RANDOM_NUMBER(B)
    CALL CPU_TIME(START)
    DO I=1,1
        WRITE(*,*) I
        CALL CALC_MIN_NORM_D(M,N,LDA,LDB,NRHS,A,B,D_CAP)
    ENDDO
    CALL CPU_TIME(FINISH)
    WRITE(*,*)'("TIME =',FINISH-START,'SECONDS.")'
END

SUBROUTINE CALC_MIN_NORM_D(M,N,LDA,LDB,NRHS,A,B,D_CAP)
    !
    ! SUBROUTINE DEFINITION TO CALCULATE THE
    ! LINEAR LEAST SQUARE SOLUTION OF ([A]^-1*B)
    !
    IMPLICIT NONE
    INTEGER :: M,N,NRHS,LDA,LDB,LWMAX,INFO,LWORK,RANK
    DOUBLE PRECISION RCOND
    INTEGER, ALLOCATABLE, DIMENSION(:) :: IWORK
    DOUBLE PRECISION :: A(LDA,N),B(LDB,NRHS),D_CAP(LDB,NRHS)
    DOUBLE PRECISION, ALLOCATABLE, DIMENSION(:) :: S,WORK
    !
    WRITE(*,*)'IN CALC MIN NORM BEGINING'
    WRITE(*,*)'DGELSD EXAMPLE PROGRAM RESULTS'
    LWMAX = 1E8
    ALLOCATE(S(M))
    ALLOCATE(IWORK(3*M*0+11*M))
    ALLOCATE(WORK(LWMAX))
    ! NEGATIVE RCOND MEANS USING DEFAULT (MACHINE PRECISION) VALUE
    RCOND = -1.0
    !
    ! QUERY THE OPTIMAL WORKSPACE.
    !
    LWORK = -1
    CALL DGELSD( M, N, NRHS, A, LDA, B, LDB, S, RCOND, RANK, WORK, LWORK, IWORK, INFO )
    LWORK = MIN( LWMAX, INT( WORK( 1 ) ) )
    !WRITE(*,*) 'AFTER FIRST DGELSD'
    !
    ! SOLVE THE EQUATIONS A*X = B.
    !
    CALL DGELSD( M, N, NRHS, A, LDA, B, LDB, S, RCOND, RANK, WORK, LWORK, IWORK, INFO )
    !
    ! CHECK FOR CONVERGENCE.
    !
    IF( INFO.GT.0 ) THEN
        WRITE(*,*)'THE ALGORITHM COMPUTING SVD FAILED TO CONVERGE;'
        WRITE(*,*)'THE LEAST SQUARES SOLUTION COULD NOT BE COMPUTED.'
        STOP
    END IF
    !
    WRITE(*,*)' EFFECTIVE RANK = ', RANK
    !D_CAP = B
END
This is the build log for a successful compilation of the file. Since I am using Visual Studio with Intel Visual Fortran, I can compile my program using the Compile option available in the editor; that is, I don't have to use a command-line interface to build and run it.
Compiling with Intel(R) Visual Fortran Compiler 17.0.2.187 [IA-32]...
ifort /nologo /debug:full /Od /warn:interfaces /module:"Debug\\" /object:"Debug\\" /Fd"Debug\vc110.pdb" /traceback /check:bounds /check:stack /libs:dll /threads /dbglibs /c /Qlocation,link,"C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\\bin" /Qm32 "D:\Google Drive\Friday_May_27_2016\Mtech Thesis Job\All new work\Fortran coding\fortran hfgmc\Console6\Console6\Console6.f90"
Linking...
Link /OUT:"Debug\Console6.exe" /INCREMENTAL:NO /NOLOGO /MANIFEST /MANIFESTFILE:"Debug\Console6.exe.intermediate.manifest" /MANIFESTUAC:"level='asInvoker' uiAccess='false'" /DEBUG /PDB:"D:\Google Drive\Friday_May_27_2016\Mtech Thesis Job\All new work\Fortran coding\Console6\Console6\Debug\Console6.pdb" /SUBSYSTEM:CONSOLE /IMPLIB:"D:\Google Drive\Friday_May_27_2016\Mtech Thesis Job\All new work\Fortran coding\fortran hfgmc\Console6\Console6\Debug\Console6.lib" -qm32 "Debug\Console6.obj"
Embedding manifest...
mt.exe /nologo /outputresource:"D:\Google Drive\Friday_May_27_2016\Mtech Thesis Job\All new work\Fortran coding\fortran hfgmc\Console6\Console6\Debug\Console6.exe;#1" /manifest "Debug\Console6.exe.intermediate.manifest"
Console6 - 0 error(s), 0 warning(s)
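The log above is a debug configuration (/Od disables optimization). For comparison, a hypothetical release-style command line with optimization and MKL enabled, using Intel's documented Windows flags (exact options may vary with the compiler version), would look something like:

ifort /nologo /O2 /Qmkl:parallel Console6.f90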
Also, I have included the Intel MKL library in the project settings (figure omitted).
I have compiled the program; the output is written to the file 'OUTPUT_FILE.TXT':
IN CALC MIN NORM BEGINING
DGELSD EXAMPLE PROGRAM RESULTS
EFFECTIVE RANK = 1500
IN CALC MIN NORM ENDING
("TIME TAKEN TO SOLVE DGELSD= 4.290028 SECONDS.")
On the other hand, MATLAB gives the following result in the output command window:
min_norm_check(10,10)
Elapsed time is 0.224172 seconds.
Also, I don't want to outperform MATLAB; it's easy and simple. But as the size of the problem increases, MATLAB stops responding. I left my program to run in MATLAB for more than two days and it still hadn't produced any results.

Data reading in Fortran

EDIT: I am making this just a question about the Fortran code and will start a new question about converting it to MATLAB.
ORIGINAL:
I am working on a project and am trying to port some old Fortran code to MATLAB. I have almost no Fortran experience, so I am not quite sure what is going on in the following code. The Fortran code simply interprets data from a binary file; I have made some decent progress porting it to MATLAB, but I got stuck at the following part:
      IMPLICIT NONE
      DOUBLE PRECISION SCALE
      CHARACTER*72 BLINE
      CHARACTER*72 DATA_FILE_FULL_NAME
      CHARACTER*6 DATA_FILE_NAME
      CHARACTER*4 CH4, CH4F
      REAL*4 RL4
      EQUIVALENCE (RL4,CH4)
      CHARACTER*2 C2
      LOGICAL LFLAG
      c2='69'
      LFLAG=.TRUE.
      DATA_FILE_FULL_NAME='./'//DATA_FILE_NAME//'.DAT'
      OPEN(UNIT=20, FILE=DATA_FILE_FULL_NAME, ACCESS='DIRECT',
     .     RECL=72, status='OLD')
      READ(20,REC=1) BLINE
      CH4f=BLINE(7:10)
      call flip(4,lflag,ch4f,ch4)
      SCALE=RL4
      RETURN
      END
c     ..................................................
      subroutine flip(n,lflag,ch1,ch2)
c     ..................................................
      integer*4 n, i
      character*(*) ch1, ch2
      logical lflag
      if(lflag) then
         do i=1,n
            ch2(i:i)=ch1(n-i+1:n-i+1)
         enddo
      else
         ch2=ch1
      endif
      return
      end
What I am stuck on is getting the correct value for SCALE, because I am not exactly sure what is happening with RL4. Can someone explain why RL4 changes value and what process changes it, so that I can reproduce that process in MATLAB?
The code probably performs a change of endianness, i.e. a byte swap. The EQUIVALENCE means that the equivalenced variables share the same memory, so you can treat the 4 bytes read from the file as a number after the byte swap.
The four bytes are therefore interpreted as a four-byte real RL4, which is then converted to the double precision value SCALE.
The byte swap is done only when the logical lflag is true; that will presumably be when the byte order of the file does not match the machine's native byte order.
In MATLAB you can probably use http://www.mathworks.com/help/matlab/ref/swapbytes.html .
Just read the 4-byte floating-point number and swap the bytes if necessary.
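Something along these lines, as a minimal sketch (the 6-byte offset to reach bytes 7:10 of record 1 is taken from the Fortran code above; data_file is a hypothetical variable holding the file name):

fid = fopen(data_file, 'r');
fseek(fid, 6, 'bof');            % skip to bytes 7:10 of the first 72-byte record
rl4 = fread(fid, 1, '*single');  % read 4 bytes in the machine's native byte order
fclose(fid);
scale = double(swapbytes(rl4));  % swap the bytes if the file's endianness differs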

How to connect MATLAB scripts

I have the following problem:
I have written six MATLAB scripts. In the first script, the user must enter the variable n_strati (a number between 1 and 5). The first script lets you make choices about the computational models, based on which variables are known or unknown, and runs them for n_strato=1. The second, third, fourth and fifth scripts follow the same procedure for layers 2-3-4-5 respectively, but their input parameters differ in kind, not just in value. For example:
Strato1 performs its calculations knowing the input variables A, B, E (and not C, D, F); Strato2 knowing A, C, E (and not B, D, F); Strato3 knowing B, D, F (and not A, C, E).
The sixth script takes all the variables from the previous scripts and processes them to obtain the final result.
The first five scripts save the data using the command:
save Strato1 alpha beta gamma
% etc.
and the sixth script loads them back with the commands:
load Strato1
load Strato2
% etc.
But I have to make sure that:
if n_strati==1, I enter the data and choose the models in script 1, skipping scripts 2-3-4-5 and proceeding to the final calculation in script 6;
if n_strati==2, I enter the data and choose the models for Strato1 in script 1 and Strato2 in script 2, skipping scripts 3-4-5 and proceeding to the final calculation in script 6;
and so on.
I wanted to know: how can I do this?
Thank you for your cooperation.
The best way would be to avoid scripts and use functions. Even if you succeed in your plan of using multiple scripts, the code will be a big mess, hard to debug, etc. So the answer is simple:
Say NO to scripts!
It is as easy as adding a signature declaring your inputs and outputs.
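A minimal sketch of the idea, with hypothetical names taken from the question (the body is a placeholder for the real model):

function [alpha, beta, gamma] = strato1(A, B, E)
    % Layer-1 computations from the known inputs A, B, E
    alpha = A + B;   % placeholder
    beta  = B * E;   % placeholder
    gamma = A - E;   % placeholder
end

A main function can then call strato1 ... strato5 depending on n_strati and pass the results straight into the final step as arguments, with no save/load files and no skipped-script logic.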

Data interpretation between Matlab and Fortran

I have some Fortran code that interprets a binary file. Thanks to another question I asked, I understand how the Fortran code works, but I need help converting it to MATLAB. Here is the Fortran code:
      IMPLICIT NONE
      DOUBLE PRECISION SCALE
      CHARACTER*72 BLINE
      CHARACTER*72 DATA_FILE_FULL_NAME
      CHARACTER*6 DATA_FILE_NAME
      CHARACTER*4 CH4, CH4F
      REAL*4 RL4
      EQUIVALENCE (RL4,CH4)
      CHARACTER*2 C2
      LOGICAL LFLAG
      INTEGER*2 I2
      if(i2.eq.13881) LFLAG=.FALSE.
      c2='69'
      LFLAG=.TRUE.
      DATA_FILE_FULL_NAME='./'//DATA_FILE_NAME//'.DAT'
      OPEN(UNIT=20, FILE=DATA_FILE_FULL_NAME, ACCESS='DIRECT',
     .     RECL=72, status='OLD')
      READ(20,REC=1) BLINE
      CH4f=BLINE(7:10)
      call flip(4,lflag,ch4f,ch4)
      SCALE=RL4
      RETURN
      END
c     ..................................................
      subroutine flip(n,lflag,ch1,ch2)
c     ..................................................
      integer*4 n, i
      character*(*) ch1, ch2
      logical lflag
      if(lflag) then
         do i=1,n
            ch2(i:i)=ch1(n-i+1:n-i+1)
         enddo
      else
         ch2=ch1
      endif
      return
      end
So basically (and I believe I understand this correctly) the Fortran code is checking the endianness and flipping the bytes if the machine and the data don't match. Additionally, the data is stored in memory in a place shared by both CH4 and RL4, so reading RL4 after defining CH4 simply reinterprets the bytes as the REAL*4 type instead of the CHARACTER*4 type. Now I need to figure out how to port this to MATLAB. I can already read in the raw data, but when I try various ways of interpreting it I always get the wrong answer. So far I have tried:
fid=fopen(LMK_file,'r');
BLINE=fread(fid, 72,'char=>char');
CH4=BLINE(7:10);
SCALE=single(CH4(1)+CH4(2)+CH4(3)+CH4(4));
SCALE=int8(CH4(1)+256*int8(CH4(2))+256^2*int8(CH4(3))+256^3*int8(CH4(4)));
in MATLAB, but both give me the same wrong answer (which makes sense, since they are doing pretty much the same thing).
Any suggestions?
For anyone else looking for the answer to this question, this is what I came up with, thanks to the help in the comments:
fid=fopen(LMK_file,'r'); %open the file for reading
IGNORE=fread(fid,6,'*char'); %ignore the first 6 bytes. This could also be performed using fseek
SCALE=fread(fid,1,'*float32','b'); %reading in the scale value as a floating point 32 bit number.
where the 'b' indicates the big endianness of the file. By using the '*float32' option, fread automatically interprets the next 4 bytes as a 32-bit floating-point number.
This returns the correct value of scale from the given file.
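Equivalently, the Fortran EQUIVALENCE trick can be mimicked in MATLAB with typecast once the 4 raw bytes are in memory (a sketch; raw is a hypothetical 1x4 uint8 vector holding bytes 7:10 of the record):

raw = fliplr(raw);                        % byte swap, as in subroutine flip
SCALE = double(typecast(raw, 'single'));  % reinterpret the 4 bytes as a REAL*4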