Does the NVIDIA RTX 3060 work in MATLAB R2022b for Convolutional Neural Networks? - matlab

I plan to buy an NVIDIA RTX 3060 12 GB. Is the RTX 3060 supported in MATLAB R2022b for training CNNs?
Thank you.

The NVIDIA RTX 3060 is part of the Ampere architecture, and it looks like it is supported in R2021a and newer.
From Does MATLAB support NVIDIA Ampere cards for GPU computation?:
For releases R2010b to R2020b support for Ampere will be via NVIDIA's forwards compatibility mode. Optimized device libraries must be compiled at runtime from an unoptimized version. Support can be limited and you might see errors and unexpected behaviour.

Forward compatibility from CUDA versions 10.0–10.2 (MATLAB releases R2019a, R2019b, R2020a and R2020b) to Ampere (compute capability 8.x) has only limited functionality.

Full Ampere built-in binary support within MATLAB is available from R2021a.

For more details, please see GPU Support by Release.
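The support tiers quoted above can be sketched as a small lookup. This is an illustrative helper, not any MathWorks API; it relies on the fixed RYYYYx release-name format, so plain string comparison orders releases correctly:

```python
# Illustrative summary of the quoted Ampere (compute capability 8.x)
# support tiers by MATLAB release. Release strings like 'R2022b' compare
# correctly with plain string comparison because the format is fixed.

def ampere_support(release: str) -> str:
    """Return the Ampere support tier for a MATLAB release string like 'R2022b'."""
    if release >= "R2021a":
        return "full built-in binary support"
    if "R2019a" <= release <= "R2020b":
        return "limited, via CUDA forward compatibility"
    if "R2010b" <= release <= "R2018b":
        return "forward compatibility mode; may be unreliable"
    return "unsupported"

print(ampere_support("R2022b"))  # full built-in binary support
```

So for the asker's R2022b, the RTX 3060 falls in the fully supported tier.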

Related

Enabling Intel SGX in BIOS

I want to test Intel SGX technology on my Lenovo Tower S510 10L3-000JFM. Using https://github.com/ayeks/SGX-hardware I checked that my CPU (Intel Core i7-6700) supports SGX, but the BIOS does not, or it may simply not be enabled in the BIOS. A BIOS update could fix this. However, the most recent BIOS update from Lenovo at https://pcsupport.lenovo.com/us/en/products/desktops-and-all-in-ones/lenovo-s-series-all-in-ones/s510-desktop/10kw/downloads/ds112505 does not state this explicitly, and I do not want to proceed with this risky operation without being sure.
My question is: is this BIOS update supporting Intel SGX? Or not?
Any help or resources are welcomed.
Last BIOS update is on 01/09/2016 and last CPU microcode update is on 07/01/2016.
According to a Lenovo BIOS engineer, BIOS for this computer model does not support Intel SGX and there is no plan for the future.
The Linux kernel does not transparently handle Intel SGX; an application has to be written specifically for SGX to use it.
If you just want to write code for Intel SGX, you can use the SIMULATION mode provided in the SGX SDK to write and test it. You won't be able to use remote attestation (or local attestation), as that requires access to the hardware. Apart from that, everything should work fine.
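As a side note on detecting this from software: on Linux (kernel 5.11 and newer), the "sgx" CPU flag appears in /proc/cpuinfo only when both the CPU and the BIOS enable SGX, which matches the asker's situation. A minimal sketch of that check; the helper function and the sample excerpt are made up for illustration:

```python
# Hedged sketch (Linux, kernel 5.11+): the kernel lists "sgx" among the CPU
# flags in /proc/cpuinfo only when both the CPU and the BIOS enable SGX.

def sgx_flag_present(cpuinfo_text: str) -> bool:
    """Return True if any 'flags' line in /proc/cpuinfo-style text lists 'sgx'."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            if "sgx" in line.split(":", 1)[1].split():
                return True
    return False

# Shortened, made-up /proc/cpuinfo excerpt for demonstration:
sample = "model name\t: Intel Core i7-6700\nflags\t\t: fpu vme de sgx sgx_lc"
print(sgx_flag_present(sample))  # True
```

On a real machine you would pass the contents of /proc/cpuinfo instead of the sample string.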

Dymola 2018 performance on Linux (xubuntu)

The issue I experience is that when running simulations (the same IBPSA/AixLib-based models) on Linux, I get a significant performance drop (simulation time roughly doubles) compared to a Windows 8 machine. Below are the individual specs of the two machines. In both cases I use the CVODE solver with equal settings. Compilation is done with VC 14.0 (Windows) or GCC (Xubuntu).
Is this issue familiar to anyone, or can anyone suggest what the reason might be?
Win 8:
Intel Xeon @ 2.9 GHz (6 logical processors)
32 GB RAM
64-Bit
Xubuntu 16.04 VM:
Intel Xeon @ 3.7 GHz (24 logical processors)
64 GB RAM
64-Bit
Thanks!
In addition to the checklist in the comments, also consider enabling hardware virtualization support if not already done.
In general GCC tends to produce slower code than Visual Studio. To turn on optimization, one could try adding the following line:
CFLAGS=$CFLAGS" -O2"
at the top of insert/dsbuild.sh.
The reason for not having it turned on by default is to avoid lengthy compilations and bloated binaries. For industrial-sized models these are actual issues.

Enable AMD-virtualization

About three weeks ago, I ran into a problem launching the WP (Windows Phone) emulator. After troubleshooting, I found that the virtualization option on my laptop is not working.
Laptop spec. (Acer 4253):
CPU: AMD E-350, Zacate 40nm Technology
OS: Windows 10 Pro 64-bit
RAM: 4.00 GB DDR3 @ 532 MHz
I downloaded Speccy to check the virtualization info, since nothing related to virtualization appears in the BIOS settings, and I found that "Hyper-threading" is not supported! Any help?
Hyper-threading is an Intel-only technology; AMD doesn't have hyper-threading on any of its processors, not even the AMD FX generation.
Hyper-threading (officially called Hyper-Threading Technology or HT Technology, and abbreviated as HTT or HT) is Intel's proprietary simultaneous multithreading (SMT) implementation used to improve parallelization of computations.
As for virtualization: you wrote it correctly in the title, but you wrote "visualization" everywhere else... processors don't have visualization. :)
Everything in your WP configuration looks OK; you shouldn't worry about the AMD parameters because they are just fine. You simply have to configure and run the program the same way as for any AMD processor, which is essentially the same as for Intel; programs have almost zero configuration differences between the two.
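For completeness, the capability the asker is actually after (hardware virtualization, i.e. AMD-V or Intel VT-x) is separate from hyper-threading, and on Linux it shows up as a CPU flag in /proc/cpuinfo. A small sketch with a made-up excerpt; the helper is illustrative, not a standard API:

```python
# Hedged sketch (Linux): hardware virtualization appears in /proc/cpuinfo as
# the "svm" flag on AMD (AMD-V) or "vmx" on Intel (VT-x). Hyper-threading is
# a separate "ht" flag and is unrelated to virtualization support.

def virtualization_support(cpuinfo_text: str) -> str:
    """Classify hardware virtualization support from /proc/cpuinfo-style text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            if "svm" in flags:
                return "AMD-V"
            if "vmx" in flags:
                return "Intel VT-x"
    return "none reported"

# Made-up excerpt resembling an AMD CPU's flags line:
sample = "flags\t\t: fpu vme de svm ht"
print(virtualization_support(sample))  # AMD-V
```

On Windows, tools like Speccy or `systeminfo` report the same capability; the point is that "SVM"/"AMD-V", not "Hyper-threading", is the entry to look for on an AMD CPU.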

DirectX won't let me use D3D_DRIVER_TYPE_HARDWARE despite having a DirectX11 capable Graphics adapter

I'd like to use SharpDX for the rendering engine of a WPF project (Windows 7, 64-bit, DirectX 10/11), but I'm running into problems getting the samples to work. I can use the DirectX 9 samples, though. The problem is likely not directly related to SharpDX, since I'm also seeing similar problems with SlimDX and the other DirectX samples.
The only DirectX driver types that work when using Direct3D 10/11 are D3D_DRIVER_TYPE_REFERENCE and D3D_DRIVER_TYPE_WARP. D3D_DRIVER_TYPE_HARDWARE does not seem to work. This does not only affect SharpDX; the C++ samples also only let me choose between REFERENCE and WARP drivers.
My understanding is that those driver types are merely software rasterizers, which implies that my DirectX installation is not working properly. But I don't see any errors: the latest drivers are installed, the system reports DirectX 11 drivers, and the graphics adapter supports DirectX 11.
dxdiag is not reporting any errors
dxdiag confirms that DirectX 11 drivers are installed
I'm using a GT 460, which should support DirectX 11
System is Windows 7 (64bit) / VS2013
SharpDX is able to create the devices, but as soon as a VertexShader is created or a SwapChain is set up, I get DXGI_NOT_SUPPORTED or E_NOINTERFACE errors.
I already swapped graphics cards from a Radeon HD 5450 to an NVIDIA GeForce GT 460, to no avail. The SharpDX samples run fine on another computer with Windows 7 and Intel onboard graphics. Can anyone give me an idea of what is going on here? Why can I only use WARP and REFERENCE drivers despite having a DirectX 11-capable graphics adapter?
Any help is greatly appreciated. The only reference I found online to such an issue was someone running the IDE inside a VM that did not have proper 3D drivers.

What do x86_64, i386, ia64 and other such terms stand for?

I frequently encounter these terms and am confused about them. Are they specific to the processor, the operating system, or both?
I have Ubuntu 12.04 running on an Intel i7 machine. So which one applies in my case?
They are processor instruction set names:
i386 is the name of the 32-bit instruction set first implemented by Intel in the 386 processor. It became dominant thanks to dirt-cheap PC hardware.
x86-64 is the name of the AMD extension added to i386 to make it capable of executing 64-bit code. This is the one you have. It is highly compatible with i386 and will execute a 32-bit program as fast as an i386 processor.
ia64 is the name of the instruction set used in Itanium processors. The other 64-bit architecture that nobody uses anymore.
These are CPU instruction sets. Application installers are compiled for some subset of them. The biggest difference is between 32-bit (i386) and 64-bit (x86_64 and ia64): you cannot run a 64-bit app on a 32-bit CPU, but the reverse usually works.
x86_64 (AMD64) is the most common 64-bit instruction set on desktop computers. It comes from AMD, which was a few years ahead with a 64-bit CPU that also ran x86 (32-bit) instructions just fine.
ia64 (Itanium) is from Intel. Itanium is fast only with 64-bit code and is still used in industry.
Intel now uses AMD's x86_64 instructions due to their popularity in the industry.
Sometimes the key "amd64" appears in an installer package name; that is what you need even for a 64-bit Intel CPU.
i386 is quite old (from Pentium times; the Pentium III uses i686). The term x86 (aliases: IA-32, x86-32) is also used for the 32-bit architecture on desktop computers. There are also other 32/64-bit architectures, such as ARM in smartphones.
Other CPU instruction-set extensions can make compression, video encoding/decoding, virtualization, random number generation, security, etc. faster and better. Windows 8 requires PAE, NX and SSE2 (some of these are not present in ARM CPUs, so there is a separate version, Windows RT, for them).
In hardware, x86_64 is a type of processor that can run both 32-bit and 64-bit applications just fine, whereas ia64 runs 32-bit applications slower than other CPUs, as it is meant for 64-bit-only applications.
Moving on to the software side: I'm not sure about Ubuntu, but generally a 64-bit Windows OS will let you use more than about 3.3 GB of memory, address memory better on your 64-bit hardware, and run processes larger than 2 GB. On 32-bit, once an application reaches the 2 GB limit, you'll usually get an OutOfMemory error from your application.
For a full article, refer to: http://en.wikipedia.org/wiki/64-bit_computing
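A quick way to see which of these names applies to your own machine is to ask the running system. A minimal Python sketch; on the asker's Ubuntu/i7 box, `platform.machine()` would typically report 'x86_64':

```python
# Minimal sketch: inspect which architecture name applies to the machine this
# interpreter runs on, and whether the process itself is 32-bit or 64-bit.
import platform
import struct

machine = platform.machine()             # e.g. 'x86_64', 'i686', 'ia64', 'aarch64'
pointer_bits = 8 * struct.calcsize("P")  # 64 for a 64-bit process, 32 for 32-bit

print(machine, pointer_bits)
```

Note the two can differ: a 32-bit Python running on a 64-bit OS reports 32 pointer bits, which is exactly the 2 GB-per-process situation described above.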