Hello, this is my first post here.
I'm currently doing my internship at a company that uses TwinCAT 3 programs for automation in conjunction with a Yaskawa robot system. The assignment I received is to expand the simulation software they currently use for the PLC software so that it can also simulate the Yaskawa interface, letting them test whether changes in either system's software cause issues in the other without having to upload the program to a PLC. I've been looking online and found a couple of options, but I'm curious whether I've missed any others or whether the options I've found won't work.
The options I'm currently considering are:
1. Use the Ethernet port on the simulation PC to connect to the Yaskawa controller. I've found a driver (https://infosys.beckhoff.com/english.php?content=../content/1033/el7031-0030/1036996875.html&id=) that allows an Ethernet port on the PC to function in real time, so I theorized it might be possible to connect the computer directly to the Yaskawa controller; the user would then not need to upload the software, and the PC would act as a PLC instead.
2. Use ROS to simulate the robot. There are ROS-Industrial libraries for programming and simulating a Yaskawa robot. It would require two computers or a virtual machine, since ROS works best on a Linux system. The only problem is that I can't figure out whether the ROS simulation can use regular Yaskawa jobs and functions, or only the adapted ones programmed in ROS.
3. Use Motoman's simulation software, MotoSim, to simulate the Yaskawa robot. This seems like an obvious solution, but I can't figure out whether it's possible to have it communicate with TwinCAT 3; I might have missed something, though.
4. Write TwinCAT 3 software that emulates the Yaskawa interface. I consider this a last resort if all else fails, since it would require the code to be updated each time something about the Yaskawa software changes, which is not ideal for quick testing.
I'd really appreciate any suggestions or additional information more experienced programmers might have.
Is it possible to access external hardware without using a driver, i.e. without having the driver abstraction layer between the program and the external device?
Can you use a device by implementing your own driver-like control/handling directly in your program code?
I'm trying to understand a program that implements the Modbus protocol and some very specific Modbus configurations, and I don't know exactly how it communicates with the Modbus devices.
It looks to me like this is very similar to what a driver does.
But can it even communicate DIRECTLY with the device without having a driver installed?
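To make it concrete, here's roughly the kind of thing I imagine it doing: building and checking the Modbus frames entirely in its own code, while the actual bytes still pass through the generic kernel serial driver. This is only my own hypothetical sketch (the device path, baud rate, slave address and registers are made up), not code from the actual program:

```c
/* Hypothetical sketch: speaking Modbus RTU from user space.  The program
 * handles the protocol itself; the kernel serial driver only moves bytes. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

/* Standard Modbus CRC-16 (polynomial 0xA001, init 0xFFFF). */
static uint16_t modbus_crc16(const uint8_t *buf, size_t len)
{
    uint16_t crc = 0xFFFF;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 1) ? (crc >> 1) ^ 0xA001 : crc >> 1;
    }
    return crc;
}

int main(void)
{
    int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);   /* example device */
    if (fd < 0) { perror("open"); return 1; }

    /* Raw 9600 8N1: the serial driver does no protocol work for us. */
    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);
    cfsetispeed(&tio, B9600);
    cfsetospeed(&tio, B9600);
    tcsetattr(fd, TCSANOW, &tio);

    /* Read 2 holding registers starting at 0x0000 from slave 1 (function 0x03). */
    uint8_t req[8] = { 0x01, 0x03, 0x00, 0x00, 0x00, 0x02, 0, 0 };
    uint16_t crc = modbus_crc16(req, 6);
    req[6] = crc & 0xFF;          /* CRC low byte first */
    req[7] = crc >> 8;
    write(fd, req, sizeof req);

    uint8_t resp[32];
    ssize_t n = read(fd, resp, sizeof resp);   /* real code would loop with a timeout */
    printf("got %zd bytes\n", n);
    close(fd);
    return 0;
}
```

So the protocol framing and CRC handling live entirely in the program, and only the raw byte transport goes through the OS serial driver, which is why it looks so driver-like to me.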
Yes; there are several microkernel OSes that are always configured this way: drivers are implemented entirely outside of the kernel.
The first thing you likely need is access to the device's registers. This is typically done with mmap(); you may need to dig around a bit to find the right settings for cacheability, etc.
The second problem is interrupts. Unless you are running something like QNX, you won't have a way to have interrupts signal your program directly. You will probably have to turn them off and poll the device periodically.
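To make that concrete, here is a rough Linux sketch of both steps: mapping the device's register block via /dev/mem with mmap() and then polling a status bit instead of waiting for an interrupt. The physical base address, register offset and ready bit are invented for illustration, and a real system may block /dev/mem access (CONFIG_STRICT_DEVMEM):

```c
/* Sketch only: mapping a (made-up) register block via /dev/mem and polling
 * a status register instead of taking an interrupt.  Requires root. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define DEV_PHYS_BASE 0x40000000u   /* hypothetical register base address */
#define DEV_STATUS    0x04          /* hypothetical status register offset */
#define DEV_READY_BIT 0x01

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);   /* O_SYNC: uncached access */
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, DEV_PHYS_BASE);
    if (regs == MAP_FAILED) { perror("mmap"); return 1; }

    /* Poll the status register instead of waiting for an interrupt. */
    while (!(regs[DEV_STATUS / 4] & DEV_READY_BIT))
        usleep(1000);   /* back off so we don't burn a whole core */

    printf("device ready, status = 0x%08x\n", regs[DEV_STATUS / 4]);
    munmap((void *)regs, 4096);
    close(fd);
    return 0;
}
```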
If you are using Linux and need I/O ports (inb, outb, etc.), see man ioperm for more information.
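A minimal sketch of the port-I/O route, assuming x86 Linux with glibc; the port numbers are just the classic parallel-port ones, used purely as an example:

```c
/* Sketch: legacy x86 port I/O from user space after ioperm() grants access. */
#include <stdio.h>
#include <sys/io.h>     /* inb/outb/ioperm on x86 Linux with glibc */

int main(void)
{
    /* Grant this process access to 3 ports starting at 0x378 (needs root). */
    if (ioperm(0x378, 3, 1) < 0) { perror("ioperm"); return 1; }

    outb(0xFF, 0x378);                   /* write the data register */
    unsigned char status = inb(0x379);   /* read the status register */
    printf("status = 0x%02x\n", status);

    ioperm(0x378, 3, 0);                 /* drop access again */
    return 0;
}
```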
Does Zephyr OS (version 1.9) need a BIOS or U-Boot for booting?
Since I am new to this, could you please explain the boot process of Zephyr RTOS?
Zephyr RTOS is designed first of all to be a "bare-metal" OS. It boots directly on the hardware and initializes it completely itself.
But first of all, you need to get a Zephyr application onto your board/MCU. This can be done using an in-circuit programmer (JTAG, SWD, etc.), but for an end user it may be more convenient to use an MCU-specific bootloader and upload an app via a UART/USB connection. Note that these MCU-specific bootloaders aren't a "BIOS" or "U-Boot".
Going further, when new ports appear for architectures that normally use, e.g., the U-Boot bootloader, I would guess the Zephyr port for that architecture will take advantage of it to make application deployment easier.
Summing up: Zephyr RTOS doesn't need a special bootloader, but it can take advantage of one to make application deployment easier for the user.
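To make the point concrete, here is a minimal Zephyr 1.9-era application sketch. The kernel and this code are linked into a single image that starts straight from the MCU's reset vector, with no BIOS or U-Boot involved at run time (the board name and build details depend on your target):

```c
/* main.c - minimal Zephyr 1.9-style application.  Built with something like
 * "make BOARD=<your_board>", it produces one image containing both the
 * kernel and this code; nothing else runs before it on the MCU. */
#include <zephyr.h>
#include <misc/printk.h>

void main(void)
{
    printk("Hello from Zephyr, booted straight from the reset vector\n");

    while (1) {
        k_sleep(1000);   /* 1000 ms; Zephyr 1.9 k_sleep() takes milliseconds */
        printk("still running\n");
    }
}
```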
I have a bunch of programs that I run as services in FireDaemon (30+ programs). They all "interact" with the desktop, as they have a GUI. I usually check their status and change parameters every now and then by going into Interactive Services via the detection message.
The server runs Windows Server 2012 R2, has plenty of resources (20 cores, 72 GB RAM, plenty of I/O), and is usually very stable. However, if I am in Interactive Services looking at the program statuses and/or modifying anything, it frequently locks me out and I cannot get back onto the server. It is essentially frozen.
How can I increase the stability? Is there something I can do to give Interactive Services more resources? Is there a limit to how many programs I can run in Interactive Services?
Any insight helps - thanks!
You probably already know this or have seen it since you're using FireDaemon Pro, but they have a free technical preview of a Session 0 viewer that does just that. We have the same issue, and when Session 0 causes problems it's almost like somebody pulled the network cable to that server.
http://support.firedaemon.com/news/24/firedaemon-session-0-viewer-pre-release-now-available.aspx
So far, so good with their session 0 viewer.
Why does it help to know about virtualization from a programmer's perspective? Apart from testing and developing on several different platforms without needing to switch between operating systems, is there a particular reason why virtualization is important for a programmer? Are there any details that must be kept in mind before developing on virtual instances?
I use it for testing our installer, because it is important to check whether the application will work on a clean installation of the operating system.
I used to do these tests by keeping a hard drive with a fresh operating system installation and making a copy of that disk for (almost) every new test run. This was very time consuming, and the virtual machine solution has saved me a lot of time. Note that this even allows you to do remote debugging as easily as when using two non-virtual machines.
Note: If you're interested, I'm using VirtualBox, which is a very good and free virtualization tool.
If you develop a driver or something else very close to the hardware, with a high risk of crashing the machine, you will be glad to be working on a virtual machine.
Reverting to an old state is easier than repairing a damaged OS.
One of the main advantages is having your entire development environment as a single image file. I have a perfectly configured version of Windows Server, Visual Studio, ReSharper, etc. I can easily try a new version of something on a copy of this virtual machine without worrying about it causing problems.
I can also back up my entire dev environment to transfer it to another physical machine very easily. I've been through 3 machines in this office alone so that was a lifesaver in itself.
The only real trade-off I see is performance. You generally have to use fewer physical CPU cores than you actually have and less memory. With a sufficiently powerful machine this is not much of a problem, though.
Edit: As nader said, I/O is obviously important for most projects as well. Although developing on a virtual machine does mean a fairly large I/O penalty compared with a native OS install, in practice I rarely find it to be a problem. The superior random access capabilities of SSDs are helping to mitigate this drawback as well.
Being able to completely reset the state of the system is very useful for debugging applications which modify their environment: if the actions are repeated after a reset, and they're constrained to the sandboxed environment of the VM, you are pretty much guaranteed to get the same result.
We have a large number of different versions / customer customisations of our software, and it's not possible for two installs of our software to coexist on the same machine. Virtualisation allows us to replace the 50-60 physical machines that we need to maintain for testing and problem reproduction with 2-3 virtual servers. It takes around 10 minutes to make a copy of a VHD template we have and create a new virtual machine, and as long as you allocate 1-2 GB of RAM the performance is comparable to that of a (slow) physical machine.
Virtual machines are also great for build machines.
Personally, I do all of my development on my desktop machine for best performance and remote debug into VMs. I don't run virtual machines on my desktop as they use up too much RAM; we have dedicated virtual servers for that.
It's good for development, because you can have the same server configuration in the virtual machine as on the production server.
https://stackoverflow.com/questions/905926/developer-software-setup
From a user-space application there should be no difference between developing for a virtualised OS and a normal OS. There may be some gotchas if your code makes explicit assumptions about the machine's memory size and number of processors and believes what the hypervisor tells you.
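As a small illustration of that point, it's safer to ask the OS at run time than to hard-code such figures; inside a VM the values below describe the virtual hardware the hypervisor exposes, not the physical host. A Linux-flavored sketch (the _SC_PHYS_PAGES query is a glibc extension):

```c
/* Sketch: querying CPU and memory figures at run time.  Inside a VM these
 * reflect what the hypervisor exposes, not the physical machine. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long cpus      = sysconf(_SC_NPROCESSORS_ONLN);  /* CPUs currently online */
    long pages     = sysconf(_SC_PHYS_PAGES);        /* physical memory pages */
    long page_size = sysconf(_SC_PAGESIZE);

    printf("CPUs online: %ld\n", cpus);
    printf("RAM: %ld MiB\n", (pages / 1024) * page_size / 1024);
    return 0;
}
```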
I'm surprised no one has mentioned the ease of deployment. All you need to do is get the build onto the virtual OS, and then you can copy the image to as many new servers (running some kind of virtualization solution, like VMware) as you want, easily scaling your application.
Record the state of a bug in a program, and send it to the developer (along with the entire "machine").
Testing your code on various OSes, some of which you don't have.
Working in a more protected environment, making sure that the code doesn't harm your system. This is useful for understanding dangerous programs like viruses and developing security against them, for writing potentially faulty hard-drive programs, and for anything that can have catastrophic effects on your system.
Easily write your own OS without the need to write to 'real' boot sectors, a potentially harmful act (hope this is not new...).
Quickly use tools and programs not found on your own OS.
Demonstrate a program at various points in time by restoring a virtual machine; this is quicker and less prone to failure than trying to recreate the state in the minutes before the demo.
Less directly connected to programming, but browsing via a virtual machine (for example, to view documentation) has the added value that your own important system (and code) is less likely to be harmed by malicious programs.
In my experience, in most cases the answer is typically "no" (once testing and targeting multiple platforms are removed; both are huge reasons to be familiar with "desktop" VM solutions). Others have done an excellent job of listing rare exceptions like debugging kernel code.
There are some quirks one must be aware of when running on a virtual machine. This is hardly an exhaustive list:
Loss of precision or even time reversal in high-resolution timers due to emulation of hardware resources (this depends somewhat on the VM platform and operating system); see the sketch after this list.
Virtual network interfaces are usually bridged. We've seen some extremely odd behavior in the host system with an application that sets up its own bridge between virtual interfaces, behavior which logically should not affect the host, in one of the leading VM solutions.
Usage models: if your product has Orwellian licensing code or records state-dependent behavior when interacting with remote systems, you should account for what would happen if a system were "paused" and "restarted", or restarted from an earlier "state". Normally this kind of thing would be taken into account anyway in a robust implementation.
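As a rough illustration of the first point, a small loop like the following (plain POSIX clock_gettime(), nothing VM-specific assumed) can surface backwards jumps or large stalls between consecutive high-resolution clock reads when run inside a guest:

```c
/* Sketch: watching for backwards jumps or big stalls between consecutive
 * reads of a high-resolution clock, which can show up under some VMs. */
#include <stdio.h>
#include <time.h>

static long long ns_now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

int main(void)
{
    long long prev = ns_now();
    for (int i = 0; i < 1000000; i++) {
        long long now = ns_now();
        long long delta = now - prev;
        if (delta < 0)
            printf("clock went backwards by %lld ns\n", -delta);
        else if (delta > 10000000)   /* more than 10 ms between reads */
            printf("stall of %lld ns between reads\n", delta);
        prev = now;
    }
    return 0;
}
```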
If you are developing in a virtual environment, you will want to make sure you know what specifications were used to create it. If you have, say, a 4 GB machine and create a virtual environment with 1 GB, you will want to make sure your development workload does not grow to the point where it overruns the memory, as this will cause performance problems. I personally ran into this, and it was a pretty tricky thing to track down. The scenario was that I was fixing a bug and testing it in a virtual environment (I did not set up the virtual environment, by the way), and the application took a performance hit because of all the memory swapping that was taking place.
A very good use for a virtual environment is when you are developing applications that mess with the Windows GINA. It's much easier to reinstall a virtual environment than an entire PC... (been there, done that too).
I do all of my development on a virtual XP instance under VMware Fusion so that I can use a Mac for everything and still write .NET code ;-)
Sometimes they are necessary, because the platform you are programming for doesn't support the standard developer environment. One such example is SharePoint: as of SharePoint 2007, you still need a server OS to install SharePoint 2007, WSS, and the Visual Studio SharePoint extensions (VSeWSS).
Thus, for SharePoint I have to use a Windows Server VM to do my development work. As for SharePoint 2010, it supports installation on Vista and Windows 7 x64, but I will still use a VM, because I don't want SharePoint on my main machine slowing everything down. Rather, I want it in a VM where the services are on when needed and off when they aren't, without having to manually turn each service on and off. This is in addition to the many great answers posted above.