I am working on an application that runs on a Zynq board.
I am developing the C code that runs on the ARM host (the PS), and I have implemented and synthesized the code for the PL. I have data transfer between the PL and the PS.
However, I don't have the board, and I want to test my programs and evaluate my system (resources, throughput, latency, ...).
Is there any way to do this? Is there a simulator? How could I see the values of the data that transit between the PL and the PS?
I only have the Vivado environment with the SDK.
Thanks
I want to say that I haven't tried it myself yet, but the answer may lie in using the AXI Bus Functional Model (BFM):
Xilinx provides AXI BFMs to verify the functionality of AXI masters and AXI slaves with AXI3, AXI4, AXI4-Lite, and AXI4-Stream interfaces:
The AXI Bus Functional Models (BFMs), developed for Xilinx by Cadence Design Systems, support the simulation of customer-designed AXI-based IP. AXI BFMs support all versions of AXI (AXI3, AXI4, AXI4-Lite and AXI4-Stream). The BFMs are delivered as encrypted Verilog modules. BFM operation is controlled via a sequence of Verilog tasks contained in a Verilog-syntax text file. The API for the Verilog tasks is described in the AXI BFM User Guide.
The AXI BFM can be used to verify connectivity and basic functionality of AXI masters and AXI slaves with the custom RTL design flow. The AXI BFM provides example test benches and tests that demonstrate the abilities of AXI3, AXI4, AXI4-Lite and AXI4-Stream Master/Slave BFM pair. These examples can be used as a starting point to create tests for custom RTL design with AXI3, AXI4, AXI4-Lite and AXI4-Stream interface. The examples can be accessed from CORE Generator or standalone web download.
The AXI BFM can also be used for embedded designs using Xilinx Platform Studio (XPS). The AXI BFM is available as part of the CIP wizard to create an AXI-based IP with AXI BFM solution. The AXI BFM is also provided as separate pcores that can be accessed from the XPS IP catalog.
There are no evaluation licenses for AXI BFM IP.
Key Features & Benefits
Supports all protocol data widths and address widths, transfer types and responses
Transaction level protocol checking (burst type, length, size, lock type, cache type)
Behavioral Verilog Syntax
Verilog Task-based API
Delivered in ISE, enabled by a Xilinx-generated license
Verilog and VHDL example designs and test benches delivered standalone or through CORE Generator for RTL design
Integrated with XPS as a pcore or as an option with CIP wizard
Supported Simulators: Aldec Riviera-PRO, Cadence Incisive Enterprise Simulator, ISE Simulator, Mentor Graphics ModelSim and Synopsys VCS
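The BFM itself is encrypted Verilog driven from Verilog tasks, so it cannot be reproduced here, but the "transaction level protocol checking" listed above can be sketched. The following Python sketch, an illustration only and not Xilinx code, checks one burst's type, length, size and alignment against a simplified subset of the AXI4 burst rules:

```python
# Illustrative sketch of transaction-level AXI protocol checking.
# The rule set is a simplified subset of the AXI4 burst rules; it is
# NOT the Xilinx BFM, which is delivered as encrypted Verilog.

FIXED, INCR, WRAP = 0b00, 0b01, 0b10  # AxBURST encodings

def check_burst(burst_type, length, size, addr, bus_width_bytes=4):
    """Return a list of protocol violations for one AXI4 burst.

    length = number of transfers (AxLEN + 1); size = AxSIZE,
    i.e. bytes per transfer = 2**size.
    """
    errors = []
    if burst_type not in (FIXED, INCR, WRAP):
        errors.append("reserved AxBURST encoding")
    if 2 ** size > bus_width_bytes:
        errors.append("AxSIZE exceeds data bus width")
    if burst_type == INCR and not 1 <= length <= 256:
        errors.append("INCR burst length must be 1..256")
    if burst_type in (FIXED, WRAP) and not 1 <= length <= 16:
        errors.append("FIXED/WRAP burst length must be 1..16")
    if burst_type == WRAP:
        if length not in (2, 4, 8, 16):
            errors.append("WRAP burst length must be 2, 4, 8 or 16")
        if addr % (2 ** size) != 0:
            errors.append("WRAP burst start address must be size-aligned")
    return errors

# A legal 16-beat INCR burst of 32-bit transfers:
print(check_burst(INCR, 16, 2, 0x40000000))  # []
# An illegal 3-beat WRAP burst (length violation reported):
print(check_burst(WRAP, 3, 2, 0x40000000))
```

The real BFM performs such checks on every transaction issued by the Verilog task API, which is what lets you watch and verify the data moving between the PS and PL in simulation.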
I hope this helps.
I'm really confused about the AUTOSAR Service software component type.
It is possible to create an atomic software component, like an application SWC, using the AUTOSAR Blockset tool, but what about the Service software component?
In the AUTOSAR software template documentation it is mentioned that the Service software component is configured in the ECU configuration phase.
My question is:
What kind of tools are used in ECU configuration phase?
The Simulink AUTOSAR Blockset allows you to develop ApplicationSwComponents and the related higher-level parts of SensorActuator and EcuHwAbstraction SwComponents and ComplexDrivers. These are mostly above the RTE, or abstracted from the actual HW, which corresponds mostly to developing at the level of the "Virtual Function Bus".
The SW components are then later mapped to ECUs (SwcToEcuMapping) and finally integrated into an ECU SW, including the MCAL, HwAbstraction and Service components, which are actually configured with tools like EB Tresos Studio, ETAS ISOLAR-A/B and Vector DaVinci Configurator, according to the SystemDescription/SystemExtract/EcuExtract and the SW-Component-Descriptions.
Therefore, in the Simulink AUTOSAR Blockset, there is no need to develop a ServiceComponent yourself, since those are part of the AUTOSAR BSW below the RTE (e.g. BswM, StbM, Dem, Dcm, Com, LdCom).
So, I've been meaning to find an answer to this question for about 3 months now. I'm just a beginner in the world of FPGAs and hardware programming in general. I've only built the NIOS and tried a few things using Quartus and a DE10-Standard FPGA (which I don't have access to anymore). So, all I know is that a bitstream or netlist is created to program an FPGA, which I can do from the programmer feature in Quartus after the design is complete.
My problem here is: how does OpenVINO manage to program the FPGA while the code is written in Python and may use several libraries? I've already ordered the OpenVINO starter platform FPGA, but I need to know how this works. I've only seen one Python-to-HDL synthesizer, MyHDL, and it looked quite complicated.
OpenVINO's FPGA support depends on the DLA plugin (dlaplugin). The Intel DLA architecture follows a set of configuration methods and can be configured with the Deep Learning Suite, which is not provided as open source.
The Intel FPGA runtime stack needs to be installed to program an Intel FPGA:
aocl program device bitstream.aocx
Afterwards, the dlaplugin lets you run the application on the FPGA.
How can I use ns-3 to simulate IoT? Is there some model that should be added?
I'm studying RPL protocol security, and I found that the simulation could be done using ns-3, but I don't know whether there is a model specific to IoT.
Thank you.
Currently, an LR-WPAN (IEEE 802.15.4) module is available in ns-3, which is one of the IoT technologies. You can use RPL in tree and/or mesh topologies due to RPL's nature. Also, a LoRaWAN module is available for ns-3 (you can find it on GitHub), which is also an IoT technology. However, LoRaWAN currently supports only a star-of-stars topology; if you extend it to multi-hop, then you can use RPL as the routing protocol in LoRaWAN.
LoRaWAN ns-3 extension can be used to simulate IoT Networks.
This is an ns-3 module that can be used to perform simulations of a LoRaWAN network. This module was developed by Davide Magrin and Martina Capuzzo of the Signet Lab at University of Padova’s Department of Information Engineering, under the supervision of Lorenzo Vangelista, Marco Centenaro, Andrea Zanella and Michele Zorzi.
The following is the NetAnim output of one such simulated mobile IoT scenario.
[Figure: LoRaWAN mobile IoT network simulation, from "Simulation and Analysis of IoT LoRaWAN Networks Under ns-3"]
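To get a feel for the airtime numbers such LoRaWAN simulations report, here is a minimal Python sketch of the standard LoRa time-on-air formula from the Semtech SX1272/76 datasheets. It is independent of the ns-3 module itself and simplified (explicit header, no frequency hopping); the parameter defaults are assumptions chosen for illustration:

```python
from math import ceil

def lora_time_on_air(payload_bytes, sf=7, bw=125000, cr=1,
                     preamble=8, crc=True, implicit_header=False,
                     low_dr_optimize=False):
    """Approximate LoRa time-on-air in seconds, per the Semtech
    SX1272/76 datasheet formula. cr=1 means coding rate 4/5."""
    t_sym = (2 ** sf) / bw                      # symbol duration
    de = 1 if low_dr_optimize else 0
    ih = 1 if implicit_header else 0
    payload_symbols = 8 + max(
        ceil((8 * payload_bytes - 4 * sf + 28 + 16 * int(crc) - 20 * ih)
             / (4 * (sf - 2 * de))) * (cr + 4), 0)
    preamble_time = (preamble + 4.25) * t_sym   # preamble + sync overhead
    return preamble_time + payload_symbols * t_sym

# A 10-byte payload at SF7 / 125 kHz takes roughly 41 ms on air:
print(round(lora_time_on_air(10) * 1000, 1))  # 41.2
```

Airtime like this, together with LoRaWAN duty-cycle limits, is exactly the kind of throughput/latency quantity the ns-3 module lets you study at network scale.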
From my research I cannot find what kernel type is used in eCos, such as monolithic or micro-kernel. All I could find is that the kernel is a real-time one, or websites just describe it as "the eCos kernel". Does this mean it is a custom-made kernel?
What I know about eCos is that it is a hard RTOS, although it is somewhat vulnerable in terms of security, and that it uses priority-based, queue-based scheduling.
A micro-kernel is:
... the near-minimum amount of software that can provide the mechanisms needed to implement an operating system (OS). These mechanisms include low-level address space management, thread management, and inter-process communication (IPC).
(Wikipedia, 11 Dec 2018)
The eCos kernel is described in its Reference Manual thus:
It provides the core functionality needed for developing multi-threaded applications:
The ability to create new threads in the system, either during startup or when the system is already running.
Control over the various threads in the system, for example manipulating their priorities.
A choice of schedulers, determining which thread should currently be running.
A range of synchronization primitives, allowing threads to interact and share data safely.
Integration with the system's support for interrupts and exceptions.
By comparison of these descriptions, it is quite clearly a micro-kernel. Other services provided by eCos, such as file systems, networking and device drivers, are external and separable from the kernel. That is to say, you can deploy the kernel alone without such services and it remains viable.
In a monolithic kernel, these services are difficult or impossible to separate, as they are an intrinsic part of the whole. Unlike eCos and most other RTOSs, they do not scale well to the small hardware platforms common in embedded systems. Monolithic kernels are suited to desktop and general-purpose computing platforms, because the platforms themselves are monolithic: a PC without a filesystem, display, keyboard etc. is not really viable, whereas in an embedded system that is not the case.
While Linux, and even Windows, are used in embedded systems, a micro-kernel is deployable on platforms with a few tens of kilobytes of memory, whereas a minimal embedded Linux, for example, requires several megabytes and will include a great deal of code that your application may never use.
Ultimately the distinction is perhaps irrelevant, as is the terminology. It is what it is. You do not choose your kernel or OS on this criteria, but rather whether it provides the services you require, runs on your target, and fits in the available resource.
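As a loose illustration of the scheduling service the kernel description above mentions (a choice of schedulers, priority manipulation), here is a toy Python sketch of a priority-ordered ready queue. It is purely illustrative, not eCos code, and assumes nothing about eCos's implementation beyond the common RTOS convention that 0 is the highest priority:

```python
import heapq

class Scheduler:
    """Toy priority scheduler: lowest numeric priority runs first
    (0 = highest), FIFO among threads of equal priority.
    Illustrative only; it is not the eCos kernel."""
    def __init__(self):
        self._ready = []          # heap of (priority, seq, name)
        self._seq = 0             # tie-breaker preserving FIFO order

    def make_ready(self, name, priority):
        heapq.heappush(self._ready, (priority, self._seq, name))
        self._seq += 1

    def pick_next(self):
        if not self._ready:
            return None           # a real kernel would run the idle thread
        return heapq.heappop(self._ready)[2]

sched = Scheduler()
sched.make_ready("logger", priority=10)
sched.make_ready("motor_control", priority=1)
sched.make_ready("ui", priority=10)
print(sched.pick_next())  # motor_control
print(sched.pick_next())  # logger (FIFO among equal priorities)
```

The point of the sketch is only that such a scheduler, plus threads, synchronization and interrupt integration, is the entire scope of a micro-kernel; file systems and networking live outside it.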
I think it is a monolithic kernel. If you review this page: http://ecos.sourceware.org/getstart.html
It is used instead of the Linux kernel, and the Linux kernel is monolithic. In addition, if it were a micro-kernel, they would highlight the kernel type, as is done for QNX, whose kernel is a micro-kernel.
I saw a question on the Linux kernel, and while reading it I had this doubt.
The Windows NT branch of Windows has a hybrid kernel. It's neither a monolithic kernel, where all services run in kernel mode, nor a micro-kernel, where everything runs in user space. This provides a balance between the protection gained from a micro-kernel and the performance seen in a monolithic kernel (as there are fewer user/kernel mode context switches).
As an example, device drivers and the Hardware Abstraction Layer run in kernel mode, but the Workstation service runs in user mode. The Wikipedia article on hybrid kernels has a good overview.
The Windows Internals book gives an explanation for the hybrid approach
... The Carnegie Mellon University Mach operating system, a contemporary example of a microkernel architecture, implements a minimal kernel that comprises thread scheduling, message passing, virtual memory, and device drivers. Everything else, including various APIs, file systems, and networking, runs in user mode. However, commercial implementations of the Mach microkernel operating system typically run at least all file system, networking, and memory management code in kernel mode. The reason is simple: the pure microkernel design is commercially impractical because it's too inefficient.
According to Wikipedia it's a hybrid kernel, which may or may not be just marketing speak for roughly the same thing as a monolithic one. The graphic on the latter page does make some things clearer, though.
Most importantly, almost no program on Windows uses the kernel API directly. The complete Windows API subsystem resides in user space, and it is a rather large part of the OS as we see it. And in more recent versions, Microsoft began to pull more and more device drivers from kernel space into user space (which is an especially good idea for certain drivers, such as those for video cards, which are probably as complex as an operating system on their own).
Hybrid kernel is the name of the kernel used by Windows systems after Windows 98; before that, Windows was a GUI overlaid on DOS, using a monolithic kernel.