How has SVUnit been used? - system-verilog

I'm looking for reasons to use SVUnit in my projects. As a software engineer I used to write tests before the production code. However, I don't see much adoption of this practice in hardware verification. Why? Is it worth it?

Since you mention you used to be a software engineer, I guess you don't need to be lectured on why unit testing is good or why TDD makes sense. Since testbenches are very much akin to software, this means that a lot of software development best practices can be carried over to testbench development, unit testing being one of them.
The fact that it hasn't seen much adoption yet shouldn't be a reason not to use something.
I've written about SVUnit here: http://blog.verificationgentleman.com/2014/05/a-quick-look-at-svunit.html

Any programming language can be interfaced to SystemVerilog through the Direct Programming Interface (DPI).
As the following quote from Section 35.2 of IEEE 1800-2012 shows, this is possible, but adapting it to your environment depends on the programmer's understanding of the calls and routines of the foreign language.
DPI is an interface between SystemVerilog and a foreign programming
language. It consists of two separate layers: the SystemVerilog layer
and a foreign language layer. Both sides of DPI are fully isolated.
Which programming language is actually used as the foreign language is
transparent and irrelevant for the SystemVerilog side of this
interface.
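To make this concrete, here is a minimal, hypothetical sketch of the C side of a DPI call. Only the svdpi.h header and the import "DPI-C" declaration come from the standard; the function itself is made up:

    /* dpi_add.c -- hypothetical C side of a DPI call.
     * The SystemVerilog side would declare:
     *   import "DPI-C" function int c_add(int a, int b);
     * and then call c_add() like any native function.
     */
    #include "svdpi.h"    /* DPI header shipped with the simulator */

    int c_add(int a, int b)
    {
        return a + b;     /* plain C; the simulator marshals the arguments */
    }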

Related

Scalability in Scala

I am going through the Scala book by Martin Odersky.
It states that the Scala language is highly scalable, the reason being that it allows users to add new features that can be used as if they were native language support.
It has got me confused with the term 'Scalability'.
I understand that scalability means the ability of software to handle huge amounts of data.
So what's the difference here?
In the context of Scala, Odersky usually means that it is scalable in the sense that it can be used for a wide range of tasks, from simple scripting to large libraries to behemoth enterprise applications.
It's good for scripting because of its type inference, relatively low verbosity (compared to Java), and functional style (which generally lends itself to more concise code).
It's good for medium-size applications and libraries because of its powerful type system, which makes it possible to catch most or even all errors at compile time rather than at runtime. The Play! framework in particular is founded on this philosophy. Furthermore, Scala runs on the JVM and can therefore harness any of the many, many Java libraries out there.
And it's good for enterprise software because it compiles to JVM bytecode, which already has a great track record in enterprise software; further, the fact that it's statically typed makes the maintenance of very large codebases much easier.
Scala is also applicable to a number of other areas, making it even more "scalable": concurrency/parallelism and domain-specific languages come to mind.
Here is a presentation by Odersky; if you start at slide 6 and go forward, you'll see him explain some other uses of Scala as well.

Assembly-generating macros in Common Lisp

I'm interested in seeing how low-level a programmer can go in pure Common Lisp (or, failing that, in implementation-specific extensions). Google hasn't been able to find me much information about this, so I'd like to hear what the experts have to say. This post mentioned a feature of SBCL for defining what the author called "virtual operators", but searching for "common lisp virtual operators" didn't yield much. The author also mentioned how hard it was to find documentation about it. Do similar systems exist for other implementations? Is there any basis for them in the standard (though, considering that such a feature would mostly be used for writing ISA-specific code, portability shouldn't really be a high priority for its users)? And where can documentation for features such as this be found?
It would be great to find a way to extend the "programmable programming language" concept to low-level code as well (especially for areas where efficiency is highly important and other libraries written in C or assembly may not be available).
This is very implementation specific and not portable at all.
Yes, SBCL uses VOPs, but other implementations have completely different compilation targets.
If you intend to go low-level in a somewhat portable way, you have two avenues:
Optimize using declarations, inlining, manual unrolling etc., checking your progress with disassemble
Implement a static blob externally, then interface it with an FFI (see the sketch below)
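For the second avenue, a hedged illustration: a tiny C kernel compiled to a shared library that any Common Lisp FFI (CFFI, for instance) can load and call. The file and function names here are made up:

    /* saxpy.c -- hypothetical number-crunching kernel kept outside Lisp.
     * Build it as a shared library, e.g.:
     *   cc -O3 -shared -fPIC saxpy.c -o libsaxpy.so
     * then load it from Lisp with an FFI and pass foreign arrays in.
     */
    void saxpy(long n, float a, const float *x, float *y)
    {
        for (long i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];   /* y := a*x + y */
    }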

Is GPU and SIMD support likely to be implemented in .NET / Java VMs?

For some time now, mainstream compute hardware has sported SIMD instructions (MMX, SSE, 3D-Now, etc) and more recently we're seeing AMD bringing 480-stream GPUs into the same die as the CPU.
Functional languages like F#, Scala and Clojure are also gaining traction, with one common attraction being how much easier concurrent programming is in these languages.
Are there any plans for the Java VM or .NET CLR to start providing access to parallel compute hardware resources, so that functional languages can mature to leverage the hardware?
It seems as though the VMs are currently the bottleneck for high-performance computing, with SIMD and GPU access being delegated to third-party libraries and post-compilers (tidepowered.net, OpenTK, ScalaCL, Brahma, etc., etc.).
Does anyone know of any plans / roadmaps on the part of Microsoft / Oracle / the open-source community to bring their VMs up to date with the new hardware and programming paradigms?
Is there a good reason why vendors are being so sluggish on the uptake?
Edit:
To address the feedback so far: it's true that GPU programming is complex and, done wrong, worsens performance. But it's well known that parallelism is the future of computing - so the crux of this question is that it doesn't help for hardware and programming languages to embrace a parallel paradigm if the runtimes sitting between the applications and the hardware don't support it. Why aren't we seeing this on the VM vendors' radars / roadmaps?
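For concreteness, this is the kind of SIMD code that takes a few intrinsics in C but has no direct expression in JVM or CLR bytecode; a minimal, illustrative SSE sketch (the function is hypothetical):

    /* sse_add.c -- four float additions in one SSE instruction.
     * Illustrative only: VM bytecode cannot express this directly,
     * which is why libraries like Mono.Simd exist.
     */
    #include <xmmintrin.h>    /* SSE intrinsics */

    void add4(const float *a, const float *b, float *out)
    {
        __m128 va = _mm_loadu_ps(a);             /* load 4 floats */
        __m128 vb = _mm_loadu_ps(b);
        _mm_storeu_ps(out, _mm_add_ps(va, vb));  /* one SIMD add */
    }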
You mean JavaCL and ScalaCL? They both try to bring CUDA/GPU programming to the JVM.
The mono runtime includes support for some SIMD instructions already - see http://docs.go-mono.com/index.aspx?link=N%3aMono.Simd
For Microsoft's implementation of the CLR you can use XNA, which allows you to run shaders etc., or the Accelerator library (https://research.microsoft.com/en-us/projects/accelerator/), which provides an interface for running GPGPU calculations.
Java has been making strong headway in the parallelism arena for some time, first with the java.util.concurrent package and now with the fork/join framework. Hopefully, in the future, languages like Clojure and Scala will provide great high level abstractions to leverage fork-join.
GPGPU programming offers significant performance gains only for very specialized problems. .Net and Java are general purpose programming languages. Plus, who wants to do CUDA-style programming in a language like Java?
Zach Tellman's Penumbra framework enables GPU programming in Clojure (both for graphics and general purpose programming).
It's somewhat experimental but I think the theoretical motivation is very sound:
Harness the GPU / specialised SIMD instructions for the serious number crunching on large data sets
Use a very high-level language that is strong at metaprogramming / DSL definition (e.g. Clojure) to orchestrate the operations at the overall level and generate the appropriate lower-level code where needed (e.g. with generous use of macro expansions)

The state of programming and compiling for multicore systems

I'm doing some research on multicore processors; specifically I'm looking at writing code for multicore processors and also compiling code for multicore processors.
I'm curious about the major problems in this field that would currently prevent a widespread adoption of programming techniques and practices to fully leverage the power of multicore architectures.
I am aware of the following efforts (some of these don't seem directly related to multicore architectures, but seem to have more to do with parallel-programming models, multi-threading, and concurrency):
Erlang (I know that Erlang includes constructs to facilitate concurrency, but I am not sure how exactly it is being leveraged for multicore architectures)
OpenMP (a directive-based API for shared-memory parallelism, which maps naturally onto multicore machines; see the sketch after this list)
Unified Parallel C
Cilk
Intel Threading Building Blocks (this seems to be directly related to multicore systems; makes sense, as it comes from Intel. In addition to defining certain programming constructs, it also seems to have features that tell the compiler to optimize the code for multicore architectures)
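As a point of reference for OpenMP, a minimal sketch (assuming a compiler with -fopenmp support): a single pragma spreads a loop across all available cores and combines the per-thread partial results.

    /* omp_sum.c -- minimal OpenMP sketch. Compile with: cc -fopenmp omp_sum.c */
    #include <stdio.h>

    int main(void)
    {
        double sum = 0.0;

        /* each thread sums a chunk; OpenMP combines the partial sums */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < 100000000; i++)
            sum += (double)i;

        printf("sum = %f\n", sum);
        return 0;
    }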
In general, from what little experience I have with multithreaded programming, I know that programming with concurrency and parallelism in mind is definitely difficult. I am also aware that multithreaded programming and multicore programming are two different things. In multithreaded programming you are ensuring that the CPU does not remain idle on a single-CPU system (as James pointed out, the OS can schedule different threads to run on different cores, but I'm more interested in describing the parallel operations in the language itself, or via the compiler); as far as I know you cannot do truly parallel operations that way. On multicore systems, you should be able to perform truly parallel operations.
So it seems to me that currently the problems facing multicore programming are:
Multicore programming is a difficult concept that requires significant skill
There are no native constructs in today's programming languages that provide a good abstraction to program for a multicore environment
Other than Intel's TBB library, I haven't found efforts in other programming languages to leverage the power of multicore architectures at compilation time (for example, I don't know whether the Java or C# compiler optimizes the bytecode for multicore systems, or whether the JIT compiler does that)
I'm interested in knowing what other problems there might be, and if there are any solutions in the works to address these problems. Links to research papers (and things of that nature) would be helpful. Thanks!
EDIT
If I had to condense my question down to one sentence, it would be this: What are the problems that face multicore programming today and what research is going on in the field to solve these problems?
UPDATE
It also seems to me that there are three levels at which multicore needs to be addressed:
Language level: Constructs/concepts/frameworks that abstract parallelization and concurrency and make them easy for programmers to express
Compiler level: If the compiler is aware of what architecture it is compiling for, it can optimize the compiled code for that architecture.
OS level: The OS optimizes the running process and perhaps schedules different threads/processes to run on different cores.
I've searched on ACM and IEEE and have found a few papers. Most of them talk about how difficult it is to think concurrently and also how current languages don't have a proper way to express concurrency. Some have gone so far as to claim that the current model of concurrency that we have (threads) is not a good way to handle concurrency (even on multiple cores). I'm interested in hearing other views.
I'm curious about the major problems in this field that would currently prevent a widespread adoption of programming techniques and practices to fully leverage the power of multicore architectures.
Inertia. (BTW: that's pretty much the answer to all "what prevents widespread adoption" questions, whether that be models of parallel programming, garbage collection, type safety or fuel-efficient automobiles.)
We have known since the 1960s that the threads+locks model is fundamentally broken. By ~1980, we had about a dozen better models. And yet, the vast majority of languages in use today (including languages newly created from scratch long after 1980) offer only threads+locks.
The major problems with multicore programming are the same as with writing any other concurrent application; but whereas it used to be uncommon to have multiple CPUs in a computer, it is now hard to find any modern computer with only one core, so taking advantage of multicore, multi-CPU architectures presents new challenges.
But this is an old problem: whenever computer architectures advance beyond compilers, the fallback seems to be a move back toward functional programming, since that paradigm, if strictly followed, produces highly parallelizable programs; with no global mutable variables, for example, independent computations can safely run in parallel.
But not all problems can be solved easily with FP, so the goal is to make other programming paradigms easy to use on multicore machines.
The first thing is that many programmers have avoided writing good multithreaded applications, so there isn't a large pool of well-prepared developers; the habits they have learned will make this kind of coding harder for them.
But, as with most changes to the CPU, you can look at how to change the compiler; for that, look at Scala, Haskell, Erlang and F#.
For libraries, you can look at the Parallel Extensions framework from Microsoft as a way to make concurrent programming easier.
The magazine is at work, but recently either IEEE Spectrum or IEEE Computer had articles on multicore programming issues, so look at what IEEE and ACM articles have been written on these issues to get more ideas about what is being explored.
I think the biggest impediment will be the difficulty of getting programmers to change languages, as FP is very different from OOP.
One place for research, besides developing languages that work well this way, is how to handle multiple threads accessing memory; here, as with much in this area, Haskell seems to be at the forefront in testing ideas (software transactional memory, for instance), so look at what is going on with Haskell.
Ultimately there will be new languages, and we may get DSLs that abstract more away from the developer, but how to educate programmers about all this will be a challenge.
UPDATE:
You may find Chapter 24, "Concurrent and multicore programming", of Real World Haskell of interest: http://book.realworldhaskell.org/read/concurrent-and-multicore-programming.html
One of the answers mentioned the Parallel Extensions for the .NET Framework, and since you mentioned C#, it's definitely something I would investigate. Microsoft has done some interesting things there, though I have to think many of their efforts seem better suited to language enhancements in C# than to a separate and distinct library for concurrent programming. But I think their efforts are worth applauding, and I respect that we're early here. (Disclaimer: I used to be the marketing director for Visual Studio about 3 years ago)
The Intel Threading Building Blocks are also quite interesting (Intel recently released a new version, and I'm excited to head down to the Intel Developer Forum next week to learn more about how to use it properly).
Lastly, I work for Corensic, a software quality startup in Seattle. We've got a tool called Jinx that is designed to detect concurrency errors in your code. A 30-day trial edition is available for Windows and Linux, so you might want to check it out. (www.corensic.com)
In a nutshell, Jinx is a very thin hypervisor that, when activated, slips in between the processor and operating system. Jinx then intelligently takes slices of execution and runs simulations of various thread timings to look for bugs. When we find a particular thread timing that will cause a bug to happen, we make that timing "reality" on your machine (e.g., if you're using Visual Studio, the debugger will stop at that point). We then point out the area in your code where the bug was caused. There are no false positives with Jinx. When it detects a bug, it's definitely a bug.
Jinx works on Linux and Windows, and in both native and managed code. It is language and application platform agnostic and can work with all your existing tools.
If you check it out, please send us feedback on what works and doesn't work. We've been running Jinx on some big open source projects and already are seeing situations where Jinx can find bugs 50-100 times faster than simply stress testing code.
The bottleneck of any high-performance application (written in C or C++) designed to make efficient use of more than one processor/core is the memory system (caches and RAM). A single core usually saturates the memory system with its reads and writes, so it is easy to see why adding extra cores and threads can cause an application to run slower. If a queue of people can pass through a door one at a time, adding extra queues will not only clog the door but also make the passage of any one individual through the door less efficient.
The key to any multi-core application is optimization of and economizing on memory accesses. This means structuring data and code to work as much as possible inside their own caches, where they don't disturb the other cores with accesses to the common cache (L3) or RAM. Once in a while a core needs to venture there, but the trick is to reduce those situations as much as possible. In particular, data needs to be structured around and adapted to cache lines and their sizes (currently 64 bytes), and code needs to be compact and not call and jump all over the place, which also disrupts pipelines.
My experience is that efficient solutions are unique to the application in question. The generic guidelines above are a basis on which to construct code, but the tweaks that come out of profiling will not be obvious to those who were not themselves involved in the optimization work.
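To make the cache-line point concrete, here is a hedged C sketch of false sharing (the 64-byte line size is an assumption that holds on most current x86 parts): two per-thread counters packed next to each other force the cores to fight over one line, while padding each onto its own line removes the contention.

    /* false_sharing.c -- sketch of the cache-line issue described above. */
    #include <stdalign.h>    /* C11 alignas */

    struct bad_counters {
        long a;              /* both fields share one 64-byte cache line, */
        long b;              /* so every write by one core invalidates    */
    };                       /* the line in the other core's cache        */

    struct good_counters {
        alignas(64) long a;  /* each field now owns its own cache line */
        alignas(64) long b;
    };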
Look up fork/join frameworks and work-stealing runtimes. These are two names for the same, or at least related, approach: recursively subdivide large tasks into lightweight units, such that all available parallelism is exploited without having to know in advance how much parallelism there is. The idea is that it should run at serial speed on a uniprocessor but get a linear speedup with multiple cores.
Sort of a horizontal analogue of cache-oblivious algorithms if you look at it right.
But I'd say the main problem facing multicore programming is that the great majority of computations remain stubbornly serial. There's just no way to throw multiple cores at those computations and make them stick.
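A hedged sketch of the recursive-subdivision idea, expressed here with OpenMP tasks (Cilk and Java's fork/join framework follow the same shape; the cutoff value is an arbitrary assumption):

    /* fj_sum.c -- recursive fork/join sketch using OpenMP tasks.
     * Large ranges split in two; the halves become independent tasks
     * that the runtime distributes (and steals) across cores.
     * Compile with: cc -fopenmp fj_sum.c
     */
    #include <stdio.h>

    #define CUTOFF 10000                      /* below this, go serial */

    static long sum(const long *v, long lo, long hi)
    {
        if (hi - lo < CUTOFF) {               /* small range: serial speed */
            long s = 0;
            for (long i = lo; i < hi; i++)
                s += v[i];
            return s;
        }
        long mid = lo + (hi - lo) / 2, left, right;
        #pragma omp task shared(left)         /* fork the left half */
        left = sum(v, lo, mid);
        right = sum(v, mid, hi);              /* right half runs here */
        #pragma omp taskwait                  /* join */
        return left + right;
    }

    int main(void)
    {
        static long v[1000000];
        for (long i = 0; i < 1000000; i++)
            v[i] = 1;

        long total;
        #pragma omp parallel                  /* start the worker pool */
        #pragma omp single                    /* one thread seeds the task tree */
        total = sum(v, 0, 1000000);

        printf("total = %ld\n", total);       /* prints 1000000 */
        return 0;
    }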

Why does Apple use Objective-C?

Why did Apple decide to use Objective-C for the iPhone SDK and not C++?
It seems strange to me that they would not have chosen a language more popular than Objective-C. Is it because they wanted to have something unique to their platform that is not otherwise in general use?
Apple merged with NeXT in the '90s and Mac OS X was made from NeXT's operating system, NeXTSTEP. Objective-C was the official language of NeXTSTEP's application frameworks, which became Mac OS X's Cocoa. Mac OS X was then adapted into the iPhone OS, and Cocoa was made into Cocoa Touch. Objective-C has held up pretty well all along the way, and a lot of Cocoa's features would be difficult to translate into C++.
So essentially, it all comes from NeXT.
Objective-C began life in 1983, I believe, created by Brad Cox and Tom Love. The idea of Objective-C was to take the purity and low-level control of C and merge it with true object-oriented features that would allow companies to customize system libraries that could communicate with the OOP layer of Obj-C. Essentially, it worked. Obj-C is a strict superset of C, unlike C++, which covers most of C but with many differences.
When Steve Jobs founded NeXT Computer (1985), he brought in some of his former Apple team and others. His best programmers were interested in using a language that expanded on C with the same speed benefits and system control. They chose Objective-C. NeXT eventually wrote many libraries and methods for the base language. These all begin with NS, for NeXTSTEP, the name of the NeXT OS. By 1989 NeXTSTEP was considered vastly superior to MS Windows or Mac OS, and many computer companies badly wanted to license it. Jobs simply didn't want to go in that direction.
Once Apple wised up and brought Steve Jobs back into the fold (1996), the infusion of NeXTSTEP into the new Mac OS X was really the key to Apple reviving its software and its programming strategy.
While C++ remains a truly excellent and powerful language, I find that Objective-C has fewer flaws (just my opinion), and Apple's continued work on the Cocoa libraries has made the Obj-C language a truly modern power with C underpinnings. Is it better than Java? Not sure. But for what it is primarily designed for (Mac OS, iOS) it is astonishingly good, if a bit overly verbose.
The greatest criticism of Obj-C is its syntactic styling, but any programmer who truly learns the language will quickly discover its amazing power and seamless fit with all things Mac, iPhone, and iPad.
Will any other platforms ultimately adopt Obj-C? Not sure, but doubtful. The Cocoa libraries, though, are truly wonderful.
It's because Objective-C has been the de facto language for Mac OS X development since before it was Mac OS X. When Jobs left Apple to set up NeXT, Objective-C was adopted as a language that wasn't C++ and avoided many of its pitfalls. It therefore makes sense that any portable or consumer equipment (including the Apple TV) uses Objective-C as its primary development language, dropping down to the underlying C layer when needed for performance or interface reasons.
Objective-C adds object-oriented programming to C. It was used for NeXTSTEP, from which a lot of OS X is derived. It supports all of C and is simpler than C++.
http://discussions.apple.com/thread.jspa?threadID=2091191
Note that Objective-C is not a new language. It's been around since 1986 - well before Java or C#!
It has been in general use ever since NeXT; many real-world applications make use of it.
Apple answered this very question here in 2010:
The Objective-C language was chosen for a variety of reasons. First
and foremost, it’s an object-oriented language. The kind of
functionality that’s packaged in the Cocoa frameworks can only be
delivered through object-oriented techniques. Second, because
Objective-C is an extension of standard ANSI C, existing C programs
can be adapted to use the software frameworks without losing any of
the work that went into their original development. Because
Objective-C incorporates C, you get all the benefits of C when working
within Objective-C. You can choose when to do something in an
object-oriented way (define a new class, for example) and when to
stick to procedural programming techniques (define a structure and
some functions instead of a class).
Moreover, Objective-C is a fundamentally simple language. Its syntax
is small, unambiguous, and easy to learn. Object-oriented programming,
with its self-conscious terminology and emphasis on abstract design,
often presents a steep learning curve to new recruits. A
well-organized language like Objective-C can make becoming a
proficient object-oriented programmer that much less difficult.
Compared to other object-oriented languages based on C, Objective-C is
very dynamic. The compiler preserves a great deal of information about
the objects themselves for use at runtime. Decisions that otherwise
might be made at compile time can be postponed until the program is
running. Dynamism gives Objective-C programs unusual flexibility and
power. For example, it yields two big benefits that are hard to get
with other nominally object-oriented languages:
Objective-C supports an open style of dynamic binding, a style that
can accommodate a simple architecture for interactive user interfaces.
Messages are not necessarily constrained by either the class of the
receiver or even the method name, so a software framework can allow
for user choices at runtime and permit developers freedom of
expression in their design. (Terminology such as dynamic binding,
message, class, and receiver are explained in due course in this
document.) Dynamism enables the construction of sophisticated
development tools. An interface to the runtime system provides access
to information about running applications, so it’s possible to develop
tools that monitor, intervene, and reveal the underlying structure and
activity of Objective-C applications.
Historical note: As a language, Objective-C has a long history. It was
created at the Stepstone company in the early 1980s by Brad Cox and
Tom Love. It was licensed by NeXT Computer Inc. in the late 1980s to
develop the NeXTStep frameworks that preceded Cocoa. NeXT extended the
language in several ways, for example, with the addition of protocols.