Altera FPGA EP4CE55F23C7N - Eclipse

I'm currently working on an Altera FPGA. In this project, we were given only the software source code, built with the Nios II Software Build Tools for Eclipse. I'd like to run and build the original code before making any changes to it. However, only a .pof programming file and a .cdf file were given to us. When setting up the hardware design, please correct me if I'm wrong, but I need the .sopcinfo file and the .sof file to configure the hardware? I tried using the .pof and .cdf files in various ways, but with no success. Is there a way I can compile and build this application using the .pof file (or some other way), create a BSP project, and load the .elf file onto the FPGA? Or do I need the .sopcinfo and .sof files? Thank you.
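From what I understand, the usual flow needs exactly those files, e.g. from the Nios II Command Shell (file names here are hypothetical):

# generate a HAL BSP from the hardware description - requires the .sopcinfo
nios2-bsp hal my_bsp ../my_system.sopcinfo

# configure the FPGA with the SRAM object file before downloading the .elf
nios2-configure-sof my_system.sof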

Related

B&R Automation Studio: avoid restarting PLC when building the same source code in different locations or machines

When building the same source code for B&R PLCs from different paths on your PC, Automation Studio wants to restart the PLC, since the programs are laid out differently in the new build. This is also an issue when building the same source on another PC, for example after pulling code down from a repository.
Is there a way to configure Automation Studio, or to connect to the running PLC and fetch the binaries from it, so that a restart is not needed?
The build and transfer with AS have several stages. At some point binaries are created, which in turn are then transformed into data objects (*.br files). The latter have a CRC and some encryption (I believe). So every task will end up being a data object (sometimes called a module).
The data objects are what is actually transferred to the PLC. With the Runtime Utility Center (RUC) you can in theory download the data objects from the PLC, but this will not help you for your issue.
If you want to avoid a warmstart for simple changes, you need to have the binaries and data objects in your project directory, notably the Temp and Binaries folders. Otherwise AS will consider your next build a rebuild, which requires a warmstart after transfer.
If you have a build chain together with your repository, you might consider storing the Binaries etc. as artifacts. I know of some companies doing exactly this.
The option which I have used in the past is to utilize the RUC to transfer only the programs you have modified. First build your project after modifying it. Then open the RUC and select "Create, modify and execute projects". Here you can basically do some scripting. In the toolbox you can find Module Functions, which allows you to download data objects to the PLC after establishing a connection. Just select the task you want to transfer in the Binaries folder of your project.
It might also be possible to modify the Transfer.lst, also located in the Binaries folder, but I haven't tried this myself.
I hope this helps.

MATLAB Production Server: add methods dynamically to deployable archive

I am developing a web site where users can test MATLAB methods as well as their own.
I am using the RESTful API of MATLAB Production Server and linking it to Node.js.
I want to know if there is a way to add MATLAB files to the deployable archive while the MATLAB server is running.
No, you can't do that.
The workflow is that you write your code, use MATLAB Compiler SDK to compile the code to a .ctf file (a deployable archive), and deploy the .ctf file to MATLAB Production Server. Once the code is compiled to a .ctf file it is fixed, and can't be modified.
If you need to modify it, you need to modify the code, recompile to a new .ctf file, and redeploy the new file.
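As a rough sketch of that recompile-and-redeploy loop, assuming a single function file mymagic.m and a server instance at C:\mps\instance (both names hypothetical; auto_deploy is the instance's deployment folder):

rem compile mymagic.m into a deployable archive (mymagic.ctf)
mcc -W CTF:mymagic mymagic.m

rem redeploy by replacing the archive in the instance's auto_deploy folder
copy /y mymagic.ctf C:\mps\instance\auto_deploy\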

OpenGL + GLEW in Eclipse (for Windows)

I'm trying to get GLEW to work under Eclipse (MinGW) on Windows. It seems to be extremely unusual not to use Visual Studio in this context; the install instructions for GLEW simply say "use the project file in build/vc6/"...
The GLEW readme also says:
"If you wish to build GLEW from scratch (update the extension data from
the net or add your own extension information), you need a Unix
environment (including wget, perl, and GNU make). The extension data
is regenerated from the top level source directory with:
make extensions"
So in order to get GLEW to work with Eclipse on Windows, I have to compile it in a Unix environment? Is there no other way?
Sure, it would probably be a learning experience to pull that off (if I were to succeed), but I feel that my time is best spent actually working on my project. And even if I did manage to cross-compile everything, would it work in anything but Visual Studio?
Is the whole thing unfeasible, and is the best solution to install Visual Studio?
Google hasn't been much help; I feel like I am the only one who has ever attempted this (is there a good reason for that?).
Well, if you still want flexibility that the VS compiler doesn't always offer, you could try downloading the GLEW source zip file (from their main SourceForge page). Saying you have to have a Unix environment in order for it to work with Eclipse is a big mistake, as I have it working with MinGW at the moment. Just download the source, extract it, and put this batch file into the directory containing the Makefile:
@echo on
rem build GLEW with MinGW, using the make from MSYS
set SYSTEM=mingw
set GLEW_DEST=C:\...[where you extracted it to]...\glew-1.7.0\usr
rem put the MSYS tools (make, etc.) on the PATH
set PATH=%PATH%;C:\MinGW\msys\1.0\bin
make all
make install.all
pause
Change ...[where you extracted it to]... to the path you extracted the downloaded source zip to. Save that and run it, and you should see a "usr" folder containing all the DLLs, libs, and headers you'll need. Copy those over to their respective OpenGL counterparts (or anywhere you'll be able to point Eclipse at later).
Now, in Eclipse, make a new project and at least be sure to include this somewhere:
#ifndef GLEW_STATIC
#define GLEW_STATIC    // tell the GLEW headers that GLEW is linked statically
#endif //GLEW_STATIC
#include <Windows.h>
#include <GL/glew.h>   // GLEW must be included before other OpenGL headers
#include <GL/wglew.h>
If you put the GLEW headers somewhere other than the OpenGL headers, you may not need the GL/ prefix. Now add the libraries by going into Project->Properties->C/C++ Build->Settings->Tool Settings->MinGW C++ Linker->Libraries and adding the following libraries:
glew32
opengl32
glu32
glew32.dll
Add any library search paths you'll need. In my case I just added "C:\MinGW\lib" for good measure.
Now save all your project files, use Project->Clean..., and build your project. If you don't get any GLEW errors and your project is set up properly, you should be able to run it.
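As a quick sanity check, here is a minimal sketch of GLEW initialization; it assumes an OpenGL rendering context is already current (context creation is omitted):

#ifndef GLEW_STATIC
#define GLEW_STATIC
#endif
#include <cstdio>
#include <GL/glew.h>

// Call once a rendering context is current; returns false on failure.
bool initGlew()
{
    GLenum err = glewInit();  // loads the available extension entry points
    if (err != GLEW_OK) {
        std::fprintf(stderr, "GLEW error: %s\n", glewGetErrorString(err));
        return false;
    }
    std::printf("Using GLEW %s\n", glewGetString(GLEW_VERSION));
    return true;
}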
Hope that works! It did for me.
Try the following:
Download the Windows 32-bit binary for GLEW here: http://glew.sourceforge.net/index.html
Follow the instructions to link your project to GLEW: http://glew.sourceforge.net/install.html
Make sure your Eclipse is also set up to compile with MinGW (I assume you've done this); a trivial test program is enough to verify the toolchain:
cout << "Hello world!";

How to set up a DotNetNuke Development Environment with Source Control?

My team is developing a new DotNetNuke web application and would like to know the recommended way to set up a development environment with source control and automated builds. We would like to keep the DNN source code separate from the source code of our custom modules and extensions.
The DotNetNuke Compiled Module template for Visual Studio wants us to store the source code in the DesktopModules directory of the DNN source code and output to the DNN source code bin directory. Is this the recommended structure? I would rather keep the files in different locations, but then it becomes more difficult to run and debug locally as it would require an install of the module for each change. Also, how should an automated build deploy any changes?
How have others set this up? Is there a recommended best practice?
For my source control, I develop modules in their own project. This contains the module code, test code, data provider code (if applicable) and anything else. This is checked into source control like any other project. Note that the module project contains no links to a specific DNN website, and DNN references are made in the project to a common "bin" directory that references your target build. For example, in my projects folder I have \bin460, \bin480, \bin510, \bin520, etc. Each of these folders holds a set of binaries for a specific DNN version. That way you can build against a particular version but test against any version you like.
The problems with source-controlling a module in place in a DNN install are:
- sometimes not all of the module code is easily isolated under a single parent directory
- it doesn't lend itself well to a PA (Private Assembly) module approach
- it's not easy to shift the project to a different DNN version for development or testing
- it's easy to inadvertently source-control parts of the DNN solution, particularly with integrated VS source control solutions.
This approach compiles quickly because you're not trying to compile the entire website. For test deployment I have a build script that copies the various parts of the module into a target website (see the sketch below). This can be hooked into the compile (link the build script) or just run in a cmd window after you've had a successful compile. My build script has a 'target' environment switch, so that I can say 'dnn520' to deploy the build to my test dnn520 install. Note that you need to manually create the module configuration first before this will work, but this is a one-time effort, and you can use the export feature to create your .dnn module manifest.
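A minimal sketch of such a deploy script; the module name, site path and 'dnn520' target are all hypothetical:

@echo off
rem hypothetical deploy script: copy the module's parts into a target DNN site
if "%1"=="dnn520" set TARGET=C:\websites\dnn520
xcopy /y bin\MyModule.dll "%TARGET%\bin\"
xcopy /y /e /i DesktopModules\MyModule "%TARGET%\DesktopModules\MyModule"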
To build your module package, invest the time in a comprehensive script which will take the various parts from your source directory and zip them into an install package. Keep all of the parts in your source control folder, copy them into a temp directory, then run a command-line zip utility (I use an ancient version of pkzip) to pack them into an installable file, along these lines:
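(A rough sketch; paths and names are hypothetical, and Info-ZIP's zip stands in for pkzip here.)

@echo off
rem hypothetical packaging sketch: stage the module parts, then zip them
set STAGE=%TEMP%\MyModule_stage
xcopy /y /e /i src\MyModule "%STAGE%"
cd /d "%STAGE%"
zip -r MyModule_01.00.00_Install.zip .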
The benefits of this approach are:
- separation of module code from installed code
- simple way of keeping only the module code in source control (don't have to exclude all the website code)
- ability to quickly test out modules in different dnn versions
- packaging script allows you to quickly and easily build a new version of a module for install testing/deployment
The drawbacks are:
- you can't use the magic green 'go' button in VS (you have to attach the debugger manually)
- more setup time than developing in place
We typically stick to keeping the module code in a folder under DesktopModules and building to the website's bin directory.
In source control, we just map the individual modules, rather than the entire website. Depending on what we're working on, a module may be an entire project in source control, or we may have multiple related modules in the same project, living next to each other.
Automatically deploying changes is somewhat difficult in DNN. It's highly recommended to have a build script that packages your module into an installable form. You can then copy installable packages into the website's Install/Module folder and request the URL /Install/Install.aspx?mode=InstallResources, which will install any packages in that folder.
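For example (the package name and site path are hypothetical):

rem hypothetical: drop the package into the site's Install/Module folder,
rem then request http://yoursite/Install/Install.aspx?mode=InstallResources
copy /y MyModule_01.00.00_Install.zip C:\websites\dnn\Install\Module\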
In response to bduke's answer: you should not, and don't want to, build projects in the DesktopModules folder.
That's where all of the out-of-the-box source code for the site goes.
That's where your modules will be "installed", and thus if someone "updates" or re-installs one, it will be overwritten.
It can make upgrading your application far more difficult. Many developers don't understand the idea of not touching the original source code files to modify their behavior, because those files will just be overwritten when you perform an upgrade.
If you want to build modules, create a solution folder called Modules and place your separate module projects there.
If you want to debug them, make the target debug output point to the web\bin folder.
If you want to install/deploy them, build them in Release mode and install them through the Module/Extension filter.

Do I have to build my LabVIEW instrument driver under Program Files?

I'm trying to build a LabVIEW plug and play instrument driver project for a device we sell. I followed the instructions to create a project, and it created the project inside the LabVIEW program directory:
C:\Program Files\National Instruments\LabVIEW 2011\instr.lib
I suppose I could connect that folder to source control and just do all the work there, but it feels weird to be working under Program Files. When I tried to move the project folder out into my regular workspace folder, it broke all the subpalette files (*.mnu). I could recreate them, but I'm afraid they wouldn't work for our customers when they install the driver from the LabVIEW web site.
Is it possible to move a driver project around, or does it have to stay in the default location? If one of our customers has installed LabVIEW in a different location (say on drive D:) will the driver menus not work for them?
I'm not in favour of user.lib for source-controlled items; using several LabVIEW versions at a time is a big problem.
Here is my routine:
Create the instrument library and save all code in a folder whose name starts with an underscore ('_'), e.g. '_foo'
Create an .mnu file (Mylib.mnu) in the parent folder of '_foo', and add the icons you need
With the OpenG package builder, create an installer routine that places the .mnu file and the folder in instr.lib
After a restart of LabVIEW the instrument driver shows up in the instruments palette.
If you keep the code in the same relative position to the mnu file there is no problem with missing VIs.
Ton
Instrument drivers are always located in the 'instr.lib' folder in the current LabVIEW version folder. There is an environment path set up in LabVIEW for this instrument driver folder, so it will always point to the correct drive for the installation of LabVIEW used.
You should keep the folder in the location used by the wizard to ensure that when distributed to your customers the sub palette menus point to the correct location and all the VIs link correctly.
I use source control for user.lib, which is in a similar location, and have no problems.