MIDI Outputs for VST Plugin

I remember this vaguely from the Hypersonic 2 VST instrument.
Basically, it's a normal VST instrument, but if you had it in your project, you could assign its MIDI output (which was equal to the input, except if you had transposition or something similar active) to the input of another MIDI track, so it would just forward all the MIDI events to be used by another synth or whatever.
In Cubase, the output of the instrument was listed next to the physical MIDI inputs in the MIDI input popdown menu, but the "Use All MIDI Inputs" option in the menu did not include Hypersonic's output (it was separated by a menu separator).
I haven't found a way to do that, does anybody know? I guess it's one of those barely documented magic lines...

I don't use C++, but to make a VST plugin that has a MIDI output you need to override the AudioEffectX::canDo() function and return 1 for the "sendVstMidiEvent" and possibly "sendVstEvents" canDo strings.
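For reference, a minimal sketch of what that override might look like against the (legacy) VST 2.x SDK. The class name MyMidiOutPlugin is hypothetical and the class declaration and the rest of the plugin are omitted; the canDo strings are the ones defined in audioeffectx.h:

#include <cstring>
#include "audioeffectx.h"

// Tell the host that this plugin emits events (specifically MIDI events)
// on its output, so a host like Cubase can offer it as a MIDI input.
VstInt32 MyMidiOutPlugin::canDo (char* text)
{
    if (std::strcmp (text, "sendVstEvents") == 0)    return 1;  // yes
    if (std::strcmp (text, "sendVstMidiEvent") == 0) return 1;  // yes
    return 0;  // 0 means "don't know" in the VST 2.x convention
}

The events themselves are then passed back from the processing callback with AudioEffectX::sendVstEventsToHost(), if I remember the SDK correctly.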

Related

Can I give "static" input on VSCode?

My college has a proprietary IDE for C programming. It is very simple: Half the screen for the code, and the other half has the input and output (as you can see here).
The input works in such a way that you only have to type it once, and the program will read the next line at every scanf there is. I find this a convenient way of testing, instead of typing the input in again every time I compile the program, and I'm wondering if there's any way to do something similar in VSCode.

How to load symbol files to BCC profiler

With bcc tools' profile, I am getting mostly "[unknown]" in the profile output on my C program. This is, of course, expected because the program's symbols are not loaded. However, I am not sure how to properly load the symbols so that the "profile" program can pick them up. I have built my program with debugging enabled ("-g"), but how do I load the debug symbols into "profile"?
Please see the DEBUGGING section in bcc profile's manpage:
See "[unknown]" frames with bogus addresses? This can happen for different
reasons. Your best approach is to get Linux perf to work first, and then to
try this tool. Eg, "perf record -F 49 -a -g -- sleep 1; perf script", and
to check for unknown frames there.
The most common reason for "[unknown]" frames is that the target software has
not been compiled
with frame pointers, and so we can't use that simple method for walking the
stack. The fix in that case is to use software that does have frame pointers,
eg, gcc -fno-omit-frame-pointer, or Java's -XX:+PreserveFramePointer.
Another reason for "[unknown]" frames is JIT compilers, which don't use a
traditional symbol table. The fix in that case is to populate a
/tmp/perf-PID.map file with the symbols, which this tool should read. How you
do this depends on the runtime (Java, Node.js).
If you seem to have unrelated samples in the output, check for other
sampling or tracing tools that may be running. The current version of this
tool can include their events if profiling happened concurrently. Those
samples may be filtered in a future version.
In your case, since it's a C program, I would recommend compiling with -fno-omit-frame-pointer.
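As a rough sketch of the whole workflow (the file name, the install path of the bcc tools and the exact profile flags vary by distro and bcc version, so double-check against the manpage):

gcc -g -fno-omit-frame-pointer -o myprog myprog.c    # keep frame pointers, keep symbols
./myprog &
sudo /usr/share/bcc/tools/profile -F 99 -p $(pgrep -n myprog) 10    # sample for 10 seconds

Note that -g alone doesn't fix "[unknown]" frames: -fno-omit-frame-pointer is what lets the stack walker work, while the function names come from the binary's (unstripped) symbol table.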

Using .do files with ModelSim (10.3a)

Here is the (brief) context for my question :
I am working in VHDL (with Microsemi's Design Suite, Libero) and I use ModelSim to simulate my work. To that end, I use a classic VHDL testbench and, to save time, a .do macro file.
This .do macro file contains very basic commands such as "restart" or deleting/adding waves.
Even if I'm not expecting much from such a file, it would be convenient for me to include in it more of the actions that I currently have to perform by hand in the graphical interface, like something I use quite a lot: combining signals into a custom bus. This action is very simple to do in ModelSim's graphical interface, but I can't find anywhere how to perform it in a .do macro file.
So my question is :
Where can I find some good documentation regarding these ModelSim's .do Macro Files?
Or am I missing the point about the use of these files? Is it relevant to use them in such a way?
I really hate to ask this kind of question here but, even though I was able to find some info here and there on various websites, I found nothing significant. I have been through quite a lot of ModelSim help documents and user guides, but they almost always focus on the graphical interface.
You can find a command reference manual for your ModelSim version here:
www.microsemi.com/document-portal/doc_view/134097-modelsim-command-reference-manual-v10-3a.
You should also be able to find this and other documentation in ModelSim under "Help" > "PE Documentation - PDF Bookcase" (substitute 'PE' for the edition you are running).
You should see all the usual commands like 'add wave'. These can be used in .do files and Tcl script files.
You can use dividers to separate signals with
add wave -divider -height 10 $DIVIDER_NAME
and if you want to expand/collapse signals, you can add signals with
add wave -group $GROUP_NAME -position end ....
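Putting those together, a minimal wave.do sketch might look like the following (the signal paths, divider/group names and run time are placeholders for whatever your testbench actually uses):

# wave.do - run with "do wave.do" from the ModelSim prompt
restart -f
delete wave *
add wave -divider -height 10 "Control"
add wave -group "DUT" -position end sim:/tb_top/dut_inst/*
run 1 ms

For combining signals into a custom bus specifically, the command that the GUI's Combine Signals action corresponds to is, as far as I know, "virtual signal", which is documented in the same command reference manual linked above.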
See also page 306 of the ModelSim User's Manual: http://users.utcluj.ro/~baruch/resources/ModelSim/modelsim_user.pdf

Difference between machine language, binary code and a binary file

I'm studying programming and in many sources I see the concepts: "machine language", "binary code" and "binary file". The distinction between these three is unclear to me, because according to my understanding machine language means the raw language that a computer can understand i.e. sequences of 0s and 1s.
Now if machine language is a sequence of 0s and 1s and binary code is also a sequence of 0s and 1s then does machine language = binary code?
What about binary file? What really is a binary file? To me the word "binary file" means a file, which consists of binary code. So for example, if my file was:
010010101010010
010010100110100
010101100111010
010101010101011
010101010100101
010101010010111
Would this be a binary file? If I google "binary file" and look at Wikipedia, I see this example picture of a binary file, which confuses me (it's not in binary?....)
Where is my confusion happening? Am I mixing file encoding here or what? If I were to ask one to SHOW me what is machine language, binary code and binary file, what would they be? =) I guess the distinction is too abstract to me.
Thnx for any help! =)
UPDATE:
In Python, for example, there is one phrase in a file I/O tutorial which I don't understand: "Opens a file for reading only in binary format". What does reading a file in binary format mean?
Machine code and binary are the same - a number system with base 2, where every digit is either 1 or 0. But machine code can also be expressed in hex format (hexadecimal), a number system with base 16. Binary and hex are closely related: it's easy to convert from binary to hex and back from hex to binary. And because hex is much more readable than raw binary, it's often what gets used and shown. For instance, the picture in your question uses hex numbers!
Let's say you have the binary sequence 1001111000001010 - it can easily be converted to hex by grouping it into blocks of four bits each.
1001 1110 0000 1010 => 9 14 0 10, which in hex becomes: 9E0A.
One can agree that 9E0A is much more readable than the binary string - and hex is what you see in the image.
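If you want to check that grouping yourself, here is a quick Python sketch (the value is just the one from the example above; any language would do):

# 0b... is a binary literal; 'X' formats the number as uppercase hexadecimal
print(format(0b1001111000001010, 'X'))      # 9E0A
# and back again: '016b' formats it as binary, zero-padded to 16 digits
print(format(0x9E0A, '016b'))               # 1001111000001010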
I'm honestly surprised to not see the information I was looking for, looking back though, I guess the title of this thread isn't fully appropriate to the question the OP was asking.
You guys all say "Machine Code is a bunch of numbers".
Sure, the "CODE" is a bunch of numbers, but what people are wondering (I'm guessing) is "what actually is happening physically?"
I'm quite a novice when it comes to programming, but I understand enough to feel confident in 'roughly' answering this question.
Machine code, to the actual circuitry, isn't numbers or values.
Machine code is a bunch of voltage gates that are either open or closed, and depending on what they're connected to, a certain light will flicker at a certain time etc.
I'm guessing that the "machine code" dictates the pathway and timing for specific electrical signals that will travel to reach their overall destination.
So for 010101, 3 voltage gates are closed (The 0's), 3 are open (The 1's)
I know I'm close to the right answer here, but I also know it's much more sophisticated - because I can imagine that which I don't know.
010101 would be easy instructions for a simple circuit, but what I can't begin to fathom is how a complex computer processes all of the information.
So I guess let's break it down?
x-Bit-processors tell how many bits the processor can process at once.
A bit is either 1 or 0, "On" or "Off", "Open" or "Closed"
so 32-bit processors process "10101010 10101010 10101010 10101010" - this many bits at once.
A processor is an "integrated circuit", which is like a compact circuit board, containing resistors/capacitors/transistors and some memory. I'm not sure if processors have resistors but I know you'll usually find a ton of them located around the actual processor on the circuit board
Anyways, a transistor is a switch so if it receives a 1, it sends current in one direction, or if it receives a 0, it'll send current in a different direction... (or something like that)
So I imagine that as machine code goes... the segment of code the processor receives changes the voltage channels in such a way that it sends a signal to another part of the computer (why do you think processors have so many pins?), probably another integrated circuit more specialized to a specific task.
That integrated circuit then receives a chunk of code, let's say 2 to 4 bits 01 or 1100 or something, which further defines where the final destination of the signal will end up, which might be straight back to the processor, or possibly to some output device.
Machine code is a very efficient way of taking a circuit and connecting it to a lightbulb, and then taking that lightbulb out of the circuit and switching the circuit over to a different lightbulb
Memory in a computer is highly necessary because otherwise to get your computer to do anything, you would need to type out everything (in machine code). Instead, all of the 1's and 0's are stored inside some storage device, either a spinning hard disk with a magnetic head pin that 'reads' 1's or 0's based on the charge of the disk, or a flash memory device that uses a series of transistors, where sending a voltage through elicits 1's and 0's (I'm not fully aware how flash memory works)
Fortunately, someone took the time to think up a different base number system for programming (hex), and a way to compile those numbers (translate them) back into binary. And then all software programs have branched out from there.
Each key on the keyboard creates a specific signal in binary that translates to a bunch of switches being turned on or off using certain voltages, so that a current could be run through the specific individual pixels on your screen that create "1" or "0" or "F", or all the characters of this post.
So I wonder, how does a program 'program', or 'make' the computer 'do' something... Rather, how does a compiler compile a program of a code different from binary?
It's hard to think about now because I'm extremely tired (so I won't try) but also because EVERYTHING you do on a computer is because of some program.
There are actively running programs (processes) in task manager. These keep your computer screen looking the way you've become accustomed, and also allow for the screen to be manipulated as if to say the pictures on the screen were real-life objects. (They aren't, they're just pictures, even your mouse cursor)
(Ok I'm done. enough editing and elongating my thoughts, it's time for bed)
Also, what I don't really get is how 0's are 'read' by the computer.
It seems that a '0' must not be a 'lack of voltage', rather, it must be some other type of signal
Where perhaps something like 1 volt = 1, and 0.5 volts = 0. Some distinguishable difference between currents in a circuit that would still send a signal, but could be the difference between opening and closing a specific circuit.
If I'm close to right about any of this, serious props to the computer engineers of the world, the level of sophistication is mouthwatering. I hope to know everything about technology someday. For now I'm just trying to get through arduino.
Lastly... something I've wondered about... would it even be possible to program today's computers without the use of another computer?
Machine language is a low-level programming language that generally consists entirely of numbers. Because they are just numbers, they can be viewed in binary, octal, decimal, hexadecimal, or any other way. Dave4723 gave a more thorough explanation in his answer.
Binary code isn't a very well-defined technical term, but it could mean any information represented by a sequence of 1s and 0s, or it could mean code in a machine language, or it could mean something else depending on context.
Technically, all files are stored in binary, we just don't usually look at the binary when we view a file. However, the term binary file is usually used to refer to any non-text file; e.g. an .exe, a .png, etc.
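Regarding the update about Python's "opens a file for reading only in binary format": it simply means the raw bytes of the file are handed to you as-is, without being decoded as text. A small sketch (example.bin is a made-up file name):

# write a couple of raw bytes first
with open('example.bin', 'wb') as f:
    f.write(bytes([0x9E, 0x0A, 0x41]))

# 'rb' = read in binary mode: you get a bytes object, no text decoding
with open('example.bin', 'rb') as f:
    data = f.read()
    print(type(data), data)       # <class 'bytes'> b'\x9e\nA'

# plain 'r' would instead try to decode those bytes as text (usually UTF-8),
# which typically fails or produces garbage for a non-text file like this one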
You have to understand how a computer works in its basic principles and this will clear things up for you... Therefore I recommend reading up on things like the von Neumann architecture.
Basically, in a very simple computer you only have one memory, like an array,
which holds the instructions for your processor as well as the data, and everything in it is binary numbers.
Your program starts at a certain place in your memory and reads the first number...
and here comes the twist: these numbers can be instructions or data.
Your processor reads these numbers and interprets them as instructions.
Example: the start address is 0
at address 0 there is an instruction like "read value from address 120 into the ALU (math unit)"
then it steps to address 1
"read value from address 121 into the ALU"
then it steps to address 2
"subtract the numbers in the ALU"
then it steps to address 3
"if the ALU value is smaller than zero, go to address 10"
it is not smaller than zero, so it steps to address 4
"go to address 20"
you can see that this is basically an if (a < b)
You can write these instructions as plain numbers and they can be run by your processor, but because nobody wants to do that by hand (that is what they did with punch cards in the '60s),
assembler was invented...
which looks like:
add 10, 11, 20 // load the values from addresses 10 and 11, run the addition and store the result at address 20
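To make the fetch-execute idea above concrete, here is a tiny, deliberately oversimplified sketch in Python; the instruction encoding and the data values are made up purely for illustration:

# memory holds both instructions and data, addressed by plain numbers
memory = {
    0: ('load_a', 120),      # read value from address 120 into the ALU
    1: ('load_b', 121),      # read value from address 121 into the ALU
    2: ('sub', None),        # subtract the numbers in the ALU
    3: ('jump_if_neg', 10),  # if the ALU value is smaller than zero, go to address 10
    4: ('jump', 20),         # otherwise go to address 20
    120: 7,                  # data: a
    121: 5,                  # data: b
}

pc, a, b, result = 0, 0, 0, 0      # pc = program counter, i.e. the current address
while pc not in (10, 20):          # addresses 10 and 20 just mean "somewhere else"
    op, arg = memory[pc]
    if op == 'load_a':
        a, pc = memory[arg], pc + 1
    elif op == 'load_b':
        b, pc = memory[arg], pc + 1
    elif op == 'sub':
        result, pc = a - b, pc + 1
    elif op == 'jump_if_neg':
        pc = arg if result < 0 else pc + 1
    elif op == 'jump':
        pc = arg

print('execution continued at address', pc)   # 20 here, i.e. a was not smaller than b

Running it ends at address 20, just like in the walkthrough above, because 7 - 5 is not smaller than zero.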
In Conclusion:
Assembler (processor instructions) can be called binary because it's stored as plain numbers.
But everything else can be a binary file, too.
In reality, if you have a simple .exe file, it is both... If you have variables in there like a = 10 and b = 20, these values can be stored somewhere between the if clauses and for loops... It depends on the compiler where it puts them.
But if you have a complex 3D model, it can be stored in a separate file with no executable code in it...
I hope it helps to clear things up a little.

pcap - Proper capitalization when referring to the file standard?

How does one properly refer to a Packet Capture file in short hand when writing about it for documentation?
I see a mix between PCAP, PCap and pcap in various areas and wikis.
The proper way to refer to a packet capture file is "a packet capture file"; "pcap"/"PCap"/"PCAP" are often used to refer to a particular type of packet capture file, namely those written in the format that libpcap/WinPcap supports for writing. There are several other capture file formats: one of them (pcap-ng) can be read by Wireshark and by libpcap 1.1.0 and later, several others can be read by Wireshark, and some can't be read by Wireshark at all.
What I (as a core developer of libpcap, tcpdump, and Wireshark) would say is the proper way to refer to files in the aforementioned format is "pcap", with no extra capitalization; the "pcap" comes from "libpcap", not directly from "packet capture", and "libpcap" is not capitalized (it's a UN*X library, and those tend to have all-lower-case names, given that almost all UN*X file systems are case-sensitive).
Others may call it "PCAP", perhaps because a number of terms in the computer and networking fields are acronyms or other initialisms and they assume "PCAP" must be as well, or call it "PCap", because they think of it as standing for "Packet Capture" rather than referring to libpcap and WinPcap, but, then again, people also referred to Sun Microsystems as "SUN" (it did come from the Stanford University Network project, but it wasn't "Stanford University Network Microsystems", it was just "Sun Microsystems").