As I want to write an efficient program that uses minimal RAM & Flash, I want to remove the HAL library completely from my project and program only at the register level.
I want to use CubeIDE for compiling & debugging, but I do not know how to remove the HAL library from my project (it seems the HAL library is created and attached to the project by default when generating a project).
Is there any practical way?
Best!
There is an option in STM32CubeIDE project generation which allows you to create empty projects.
The empty project comes with the following:
main.c : Mostly empty
syscalls.c : Stub implementations of the low-level system calls (_write, _read, etc.) that newlib expects; mostly irrelevant for bare-metal code, but harmless to keep.
sysmem.c : Implements the _sbrk() function, which is used by malloc() (and C++ new).
startup_stm32[xxxxxxxx].s : Startup file in assembly. You can leave it as it is
[xxxxxx]_FLASH.ld : Linker script file. Most of the time, this can be left unchanged.
But you need some additional libraries & files.
CMSIS Library : This includes some core functions common to all Cortex M devices. The core library is header only, and it's the only one you need to get started. There are some additional CMSIS libraries, like the DSP library which you may need depending on your project requirements. I suggest downloading it from its official repository.
Official STM32 headers from ST : This is actually called STM32Cube[xx] (STM32CubeF4, for example) and includes the Cube & HAL framework you want to get rid of. But we're only interested in the CMSIS-compliant device headers; you can delete the rest. It also includes a version of CMSIS which lags behind the official one. Since you can download the latest CMSIS from its official repository, you don't need the one included in the Cube package. You can download the relevant package from ST; for example, this one is for the F4 series.
Once you have the needed packages, you need to configure STM32CubeIDE so that your project uses the newly obtained libraries. Basically, you need to add some additional include directories and symbol definitions. There is also a system_stm32[xxxxx].c file, which can be found in the STM32Cube package and needs to be added to your project.
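For a concrete feel of what register-level code looks like once the CMSIS device header and system_stm32 file are in place, here is a minimal blinky sketch. It assumes an STM32F4 part with a user LED on PA5 (as on a Nucleo-F401RE); the header name and pin will differ for other devices, so treat it as an illustration only.

#include <stdint.h>
#include "stm32f4xx.h"                    /* CMSIS device header from the STM32Cube package */

int main(void)
{
    RCC->AHB1ENR |= RCC_AHB1ENR_GPIOAEN;         /* enable the GPIOA clock            */
    (void)RCC->AHB1ENR;                          /* dummy read so the clock is active */

    GPIOA->MODER &= ~(3u << (5 * 2));            /* clear the mode bits for PA5       */
    GPIOA->MODER |=  (1u << (5 * 2));            /* PA5 = general-purpose output      */

    for (;;)
    {
        GPIOA->ODR ^= (1u << 5);                 /* toggle the LED                    */
        for (volatile uint32_t i = 0; i < 500000u; ++i) { }   /* crude busy-wait      */
    }
}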
Here you can find a somewhat related answer.
Here is an example STM32CubeIDE blinky project I've created for the Blue Pill board (STM32F103C8). It may be somewhat outdated but it's probably still useful.
The method I've described probably isn't very practical. Some people suggest creating a normal Cube & HAL project and then pruning the unused parts.
I am working with the STM32 B-L072Z-LRWAN1 discovery kit. How can I add the I-CUBE-LRWAN libraries externally, after I have created a project in CubeMX for the B-L072Z-LRWAN1 discovery kit? The project I created does not include the radio libraries. I am coding with System Workbench.
Before this project, I used the Ping-Pong example, but it diverged too much from what I need, so I am trying to make a fresh project for LoRa.
Thanks for any answers.
I'm going to answer this from the point of view of a Keil project, because that is the tool I've done it in, but these steps should carry over to other IDEs. It mostly involves copying the code and adding the right include and source paths.
The I-CUBE-LRWAN project is set up so that the parts of the project are kept separate. The root directory of the project consists of three folders: Middlewares, Projects and Drivers. These folders contain both .h and .c files inside their tree structure.
Drivers
The Drivers folder contains all the files related to the specific board you are using: the HAL (Hardware Abstraction Layer), CMSIS and the BSP (Board Support Package). The HAL and CMSIS provide a generalized interface to the device; code written against them can be ported to other STM32 platforms by swapping out the CMSIS device-specific definitions. I would recommend that when you create a project you tick the box to include all library files in your project. This will make compilation take longer and your project bigger, but it will also save you from fussing about with missing libraries. The BSP contains board-specific drivers for the peripherals present on your platform; on the B-L072Z-LRWAN1 this includes the LEDs and buttons, and on the STM32L4 Nucleo the joystick/LCD.
I would recommend that you copy your board-specific BSP files (.h and .c) into your project and use them as a standardized interface to board-specific features. You should create a new BSP .h/.c pair when you are using a custom board; a sketch of what such a pair could look like follows below.
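As an illustration, here is a minimal LED-only pair. The file and function names (board_led.h, BoardLed_Init, BoardLed_Set) are made up for this example, and it assumes the LED sits on PB5 of an STM32L0 part with the HAL available; adjust the port, pin and header to your board.

/* board_led.h : hypothetical custom-board BSP header */
#ifndef BOARD_LED_H
#define BOARD_LED_H

void BoardLed_Init(void);
void BoardLed_Set(int on);

#endif /* BOARD_LED_H */

/* board_led.c : sketch of the matching implementation */
#include "stm32l0xx_hal.h"
#include "board_led.h"

void BoardLed_Init(void)
{
    GPIO_InitTypeDef init = {0};

    __HAL_RCC_GPIOB_CLK_ENABLE();               /* clock for the LED port */

    init.Pin   = GPIO_PIN_5;
    init.Mode  = GPIO_MODE_OUTPUT_PP;
    init.Pull  = GPIO_NOPULL;
    init.Speed = GPIO_SPEED_FREQ_LOW;
    HAL_GPIO_Init(GPIOB, &init);
}

void BoardLed_Set(int on)
{
    HAL_GPIO_WritePin(GPIOB, GPIO_PIN_5, on ? GPIO_PIN_SET : GPIO_PIN_RESET);
}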
Projects
The Projects folder contains your project-specific code, the business end of your application. A bit of a bump in the road comes up here, as ST has chosen to implement all their LoRaWAN code inside the main.c file. I would recommend that you take out all the LoRaWAN-related initialization and transmission code (generally anything related to LoRaMainCallback_t) and put it inside a separate file with a defined interface. This is a bit of work, but it pays dividends in that your normal code is kept separate from your LoRaWAN handling. I've stored my LoRaWAN code inside the file lorawan.<h|c>; a hypothetical version of such an interface is sketched after this paragraph. With regard to the rest of the files: move their contents into separate folders in your project called LoRaWAN/App/inc/ and LoRaWAN/App/src/. This pertains to the files: debug.c, hw_gpio.c, hw_rtc.c, vcom.c, debug.h, hw.h, hw_conf.h, hw_gpio.h, hw_msp.h, hw_rtc.h, hw_spi.h, utilities_conf.h and vcom.h. Add the inc folder to your include path (the -I option) and the source files to your project.
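To give an idea of what the "defined interface" for the extracted code could look like, here is a hypothetical lorawan.h. The function names are examples of my own, not part of the I-CUBE-LRWAN API; the point is simply that main.c only sees these calls and never touches LoRaMainCallback_t directly.

/* lorawan.h : hypothetical application-facing interface */
#ifndef LORAWAN_APP_H
#define LORAWAN_APP_H

#include <stdint.h>

void LoraApp_Init(void);                                             /* wraps the callback/stack setup taken out of main.c */
int  LoraApp_Send(uint8_t port, const uint8_t *data, uint8_t len);   /* queue an uplink; returns 0 on success              */
void LoraApp_Process(void);                                          /* run the stack's state machine from the main loop   */

#endif /* LORAWAN_APP_H */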
Middlewares
The Middlewares folder needs to be copied from the source project to your target project; every inc folder (or folder containing .h files) needs to be added to your include path, and every source file needs to be added to your project. I would recommend keeping the same folder structure inside your IDE, to make navigating between the project and the folder structure easier.
Another positive effect of keeping the folder structures similar is that upgrading your code to a newer stack should be easier, because the files can be found in roughly the same place in both projects.
Preprocessor defines
And a most important step: you need to take the preprocessor defines used by the example project and copy them into your own project. In Keil these can be found under Options for Target x -> C/C++ -> Define (other IDEs have an equivalent setting in a different place). It contains something akin to these values: STM32L072xx, USE_B_<board name>, USE_HAL_DRIVER, REGION_EU868, DEBUG, TRACE. As you can see, I'm using a Murata radio with an integrated STM32, the EU 868 region, and the debugging and tracing options.
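These symbols matter because the stack and the HAL select code at compile time based on them. This is only a simplified illustration of the kind of gating they drive, not the actual library code:

#if defined(REGION_EU868)
/* the EU 868 MHz channel plan gets compiled into the LoRaWAN stack */
#endif

#ifdef USE_HAL_DRIVER
#include "stm32l0xx_hal.h"   /* the device HAL is pulled in only when this symbol is defined */
#endif

If one of them is missing, the corresponding code is silently left out and you usually end up with confusing compiler or linker errors.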
If you've done everything right, you should be able to include "hw.h" and compile your program.
I want to use two Modelica libraries together in Dymola, so for convenience I wrote a little script, loadLibraries.mos, that just opens the two libraries.
But they use different versions of the MSL (3.2.1 versus 3.2.2), defined by the uses annotation in the top-level package.mo:
annotation(uses(Modelica(version="3.2.1")));
The library developed by us uses 3.2.2; the library that uses MSL 3.2.1 is developed by someone else.
Now whenever I run the mos script (or when I open the two libraries manually), Dymola wants to run an update script. As far as I can see, nothing gets changed by the update script, so I would like to
either not run it at all, e.g. by defining a range of accepted versions like annotation(uses(Modelica(version>="3.2.1")));
or always run it, without asking first, e.g. by setting some flag AlwaysSilentyAcceptMSLUpgrade.
Under Edit, Options, Version there is a checkmark Force upgrade of models to MSL version but I am unsure how to use it from my mos script (for all users).
My pragmatic solution would be to ask yourself whether your own library really needs anything from 3.2.2 that is not yet present in 3.2.1; if not, change your library to only require 3.2.1. Or, the other way round (given that you can change the package.mo of the other library), change the uses annotation there to 3.2.2.
Don't change your own library, but make the library using Modelica 3.2.1 read-only (e.g. by making the files read-only).
That should skip the prompt (at least from Dymola 2016) - and as far as I understand you don't edit that library yourself anyway.
That works for libraries that don't need any update between the versions; which obviously holds for Modelica 3.2.1->3.2.2 since there is no conversion - but it would also work if there were a conversion that didn't influence this particular library.
Whether you call them add-ons, plugins, or extra pieces of code connected to the original software later on, it really doesn't matter. I would love to understand how they work; there has to be a simple explanation of how to design a plugin system. Unfortunately, I have never understood it, and a lot of open questions remain in my mind. For example, how does the program find a plugin? How does it interface with it? When is it preferable for a piece of software to have a plugin system?
Thanks for all the helpful answers. It seems I asked too open a question; fortunately, I got keywords to look for. I liked David's answer even though I am not a Java guy, because his explanation made sense to me :)
Plug-ins work by conforming to well-known interfaces that the main application expects to work with.
There are several ways in which a plug-in architecture actually works, but in general, these are the steps:
Plug-ins are designed to match an interface that the application expects. For example, a simple application might require that plug-ins implement an IPlugin interface.
Plug-ins are loaded by the application, usually when the app is starting up.
Plug-ins are often provided access to much of the data that the application manages. For example, Firefox plug-ins can access the current web page, and Eclipse plug-ins can access the open files.
Here are two ways (out of several) in which an application can find plug-ins:
The plug-ins are known to exist in a particular folder, and the application knows to load plug-ins from that folder
Each plug-in runs as a service, and the services are designed to work together (this is how an OSGi-based application works)
When plug-ins are found, they are loaded by the application (sometimes the job of a Class Loader).
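As a language-neutral illustration of the folder approach, here is a small C sketch using POSIX dlopen(). It assumes a made-up contract in which every plugin is a shared object in ./plugins exporting a function called plugin_init; error handling is trimmed for brevity.

#include <dirent.h>
#include <dlfcn.h>
#include <stdio.h>
#include <string.h>

typedef void (*plugin_init_fn)(void);

static void load_plugins(const char *dir_path)
{
    DIR *dir = opendir(dir_path);
    if (dir == NULL)
        return;

    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL) {
        if (strstr(entry->d_name, ".so") == NULL)
            continue;                                  /* only look at shared objects */

        char path[512];
        snprintf(path, sizeof(path), "%s/%s", dir_path, entry->d_name);

        void *handle = dlopen(path, RTLD_NOW);         /* load the plugin */
        if (handle == NULL)
            continue;

        /* Look up the agreed-upon entry point: this is the "interface". */
        plugin_init_fn init = (plugin_init_fn)dlsym(handle, "plugin_init");
        if (init != NULL)
            init();                                    /* let the plugin register itself */
    }
    closedir(dir);
}

int main(void)
{
    load_plugins("./plugins");
    return 0;
}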
A software architect might design a plug-in architecture when they expect that either the software provider or the user community will implement new features that were not originally part of the system. Two great examples are Eclipse and Firefox; other applications include Adobe Photoshop (for artistic techniques and graphical tools) and Winamp (for visualizations).
Create an interface that all plugins of a particular type will implement
Write the code that will 'consume' the plugin against the interface only.
Have a dynamic way to load a DLL containing a plugin type that implements your interface (for instance, have a configurable folder location, check whether any DLLs in that folder contain types that implement your interface, and dynamically load any that do; in .NET this might use Assembly.LoadFile()).
If you want to have a look at some source code, Paint.NET is free and open source, and has a plugin architecture.
A program typically has to be designed to look for a plug-in, and the plug-in has to have a standard access point to accept control from the main program. Every application or website does it a little differently.
The simplest type of plug-in is accessed something like this:
if (a plug-in exists/is configured)
call predefined plug-in code
In this case, the main program is coded to only handle a specific set of plug-ins (many PHP-based WordPress templates are like this). A slightly more advanced plug-in system looks something like this:
perform application specific logic
if any plug-in exists that exposes the run_after_app_specific_logic function
call plug-in code
This second case can handle ridiculously complex plug-ins ... the plug-in would just need to implement more functions called by the master program.
Eclipse is an example of an application framework which is entirely plugin-based, meaning that all functionality is implemented as plugins. There is a thin layer at the bottom for startup/shutdown and plugin management, but everything else is implemented as plugins on top of that. This results in a framework which can be used for just about everything. More info about the Eclipse plugin architecture can be found here: http://www.eclipse.org/articles/Article-Plug-in-architecture/plugin_architecture.html.
It's very language dependent.
In an interpreted language it simply involves calling a file that follows a pattern.
In C it's pretty hard to do without help. In C on Windows, a "DLL" can be a plug-in, and DLLs are often used that way.
In an OO language with reflection, you might create an object that implements an interface and load it reflectively. After it's loaded, you can ignore the fact that it was a plug-in because it's treated as any other object in your code.
.NET has a plugin architecture (is it COM?). Well, anyway, COM can be used as (or arguably is) a plugin system.
Your question is probably too open-ended because of all the possibilities. There is no single answer.
I've never written a plugin system. But this is how I imagine it in my head:
Your program has a subdirectory for plugins (e.g. "C:\Program Files\My Program Name\plugins").
You create plugins as DLL files and place them in the plugins folder.
These DLLs would export functions with predefined names.
When you run your program, it looks through all the DLLs in your plugins folder. In each one it would look for an exported function with a certain name (e.g. "Load") and call that function. The plugin could then do any setup that it needed to do.
The program would then call an exported function on the plugin with a name like "GetPluginName". The plugin would return its name, and the program could then use that name when it displays a list of plugins to the user.
When it comes time to invoke the plugin, the program would call another exported function (maybe "Activate") and probably pass the plugin a pointer to the data that the plugin is going to work on. The plugin would then do its work on the data.
The plugin might also export another function that the program would call to show a setup dialog where you could change the plugin options.
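In C on Windows, that imagined scheme would look roughly like this. The export names Load, GetPluginName and Activate follow the description above and are an invented convention, not a real API; error handling is kept minimal.

#include <windows.h>
#include <stdio.h>

typedef void         (*LoadFn)(void);
typedef const char * (*GetPluginNameFn)(void);
typedef void         (*ActivateFn)(void *data);

static void run_plugin(const char *dll_path, void *data)
{
    HMODULE dll = LoadLibraryA(dll_path);              /* load the plugin DLL */
    if (dll == NULL)
        return;

    LoadFn          load     = (LoadFn)GetProcAddress(dll, "Load");
    GetPluginNameFn get_name = (GetPluginNameFn)GetProcAddress(dll, "GetPluginName");
    ActivateFn      activate = (ActivateFn)GetProcAddress(dll, "Activate");

    if (load != NULL)
        load();                                        /* plugin does its own setup      */
    if (get_name != NULL)
        printf("Loaded plugin: %s\n", get_name());     /* name for the plugin list in UI */
    if (activate != NULL)
        activate(data);                                /* hand the plugin its work item  */

    /* FreeLibrary(dll) once the plugin is no longer needed. */
}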
A plugin system can be implemented in many ways, but the common way for a lot of C/C++ applications is a DLL-based plugin SDK.
The DLL will expose various predefined function calls which allow the plugin to "set itself up" in the running application, such as adding menu items, new functionality or extra options (like 3D rendering implementations).
More often than not, there's no need for any special discovery; the plugin mechanism is generally dumb: "Here's a code signature I understand, and here are the calls I can make. I have no clue how the thing I'm calling will do the job, but I expect the result to be in a certain format." That is pretty much a contract. Now the plugin implements the contract and makes itself available. In Java, for example, "make available" simply means that the implementing classes are loaded into memory. A JDBC driver for a particular database would be a good example.
I'm trying to write an SSH client for the iPhone, and I'd like to use the libssh2 open source library to do so. It's written in C.
How should I include this C library in my iPhone app? Should I compile it into some binary that I include in my app, or do I add all the source to my project and try to compile it along with the rest of my app?
I'm interpreting this question as:
"Should I compile the C library code once, and include the binary library in my project? Or should I include all the source and compile it every time I build my app?"
It depends. One of the projects I work on depends on several external libraries. Basically, we have a simple rule:
Do you think you will need to change code in the C library often?
If you will be changing the code, or updating versions often, include the source and build it with the rest of your project.
If you're not going to change the code often or at all, it might make sense to just include the pre-built binary in your project.
Depending on the size of the library, you may want to set it up as a distinct target in your project, or for even more flexibility, as a sub-project of your main project.
If I was in your place, I would build libssh2 ahead of time and just include the binary library in my iPhone project. I would still keep the libssh2 source around, of course, in case it does need to be re-built down the road.
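Either way, the application code that consumes the library looks the same. As a rough sketch of what using a pre-built libssh2 involves (assuming an already-connected TCP socket, with real error handling omitted):

#include <libssh2.h>

int open_ssh_session(int sock, const char *user, const char *password)
{
    if (libssh2_init(0) != 0)                           /* one-time library init     */
        return -1;

    LIBSSH2_SESSION *session = libssh2_session_init();
    if (session == NULL)
        return -1;

    if (libssh2_session_handshake(session, sock) != 0)  /* SSH banner + key exchange */
        return -1;

    if (libssh2_userauth_password(session, user, password) != 0)
        return -1;

    /* ... open channels, run commands, transfer files ... */

    libssh2_session_disconnect(session, "done");
    libssh2_session_free(session);
    libssh2_exit();
    return 0;
}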
I have an iPhone app that is 90% C. I have had no problem adding 3rd party sources to my project and compiling. I am using Lua, zlib, and libpng with no modifications. I've also included standard libraries like unistd and libgen and they just work™.
The Three20 iPhone library has a great how-to on adding their library to your Xcode project. Give that a shot.
I think you will find in the long run you will be better off building it into a standalone library and linking it with your application. This makes it easier to integrate into future apps. Another benefit is that it encourages code separation. If you feel pretty confident with the library, you can link your debug exe to the release build of the library and get some extra performance.
I can't really think of any downsides to creating a library, after the initial cost of setting it up, and having an extra project to modify if you have some changes that need to be made to all your projects. Even if you don't know how to make a library for the iPhone, this is a good excuse to learn.
Just adding the source to your project should work fine as well.
I've met quite a few people lately who say that 3rd party libraries don't belong in version control. These people haven't been able to explain to me why, so I hoped you guys could come to my rescue :)
Personally, I think that when I check out the trunk of a project, it should just work; no need to go to other sites to find libraries. More often than not, you'd otherwise end up with multiple versions of the same 3rd party lib across different developers, and sometimes with incompatibility problems.
Is it so bad to have a libs folder up there, with "guaranteed-to-work" libraries you could reference?
In SVN, there is a pattern used to store third-party libraries called vendor branches. This same idea would work for any other SVN-like version control system. The basic idea is that you include the third-party source in its own branch and then copy that branch into your main tree so that you can easily apply new versions over your local customizations. It also cleanly keeps things separate. IMHO, it's wrong to directly include the third-party stuff in your tree, but a vendor branch strikes a nice balance.
Another reason to check in libraries to your source control which I haven't seen mentioned here is that it gives you the ability to rebuild your application from a specific snapshot or version. This allows you to recreate the exact version that someone may report a bug on. If you can't rebuild the exact version you risk not being able to reproduce/debug problems.
Yes you should (when feasible).
You should be able to take a fresh machine and build your project with as few steps as possible. For me, it's:
Install IDE (e.g. Visual Studio)
Install VCS (e.g. SVN)
Checkout
Build
Anything more has to have very good justification.
Here's an example: I have a project that uses Yahoo's YUI Compressor to minify JS and CSS. The YUI .jar files go into source control, in a tools directory alongside the project. The Java runtime, however, does not; that has become a prerequisite for the project, much like the IDE. Considering how popular the JRE is, that seems like a reasonable requirement.
No - I don't think you should put third party libraries into source control. The clue is in the name 'source control'.
Although source control can be used for distribution and deployment, that is not its prime function. And the arguments that you should just be able to check out your project and have it work are not realistic. There are always dependencies. In a web project, they might be Apache, MySQL, the programming runtime itself, say Python 2.6. You wouldn't pile all those into your code repository.
Extra code libraries are just the same. Rather than include them in source control for ease of deployment, create a deployment/distribution mechanism that allows all dependencies to be easily obtained and installed. This makes the steps for checking out and running your software something like:
Install VCS
Sync code
Run setup script (which downloads and installs the correct version of all dependencies)
To give a specific example (and I realise this is quite web centric), a Python web application might contain a requirements.txt file which reads:
simplejson==1.2
django==1.0
otherlibrary==0.9
Run that through pip and the job is done. Then when you want to upgrade to use Django 1.1 you simply change the version number in your requirements file and re-run the setup.
The source of 3rd party software doesn't belong (except maybe as a static reference), but the compiled binaries do.
If your build process will compile an assembly/dll/jar/module, then only keep the 3rd party source code in source control.
If you won't compile it, then put the binary assembly/dll/jar/module into source control.
This could depend on the language and/or environment you have, but for projects I work on I place no libraries (jar files) in source control. It helps to be using a tool such as Maven which fetches the necessary libraries for you. (Each project maintains a list of required jars, Maven automatically fetches them from a common repository - http://repo1.maven.org/maven2/)
That being said, if you're not using Maven or some other means of managing and automatically fetching the necessary libraries, by all means check them into your version control system. When in doubt, be practical about it.
The way I've tended to handle this in the past is to take a pre-compiled version of the 3rd party libraries and check that into version control, along with the header files. Instead of checking the source code itself into version control, we archive it off to a defined location (a server hard drive).
This gives you something like the best of both worlds: a one-step fetch process that gets everything you need, without bogging down your version control system with a big pile of source files. Also, by fetching pre-compiled binaries, you skip that phase of compilation, which makes your builds faster.
You should definitely put 3rd party libraries under source control. Also, you should try to avoid relying on stuff installed on an individual developer's machine. Here's why:
All developers will then share the same version of the component. This is very important.
Your build environment will become much more portable. Just install source control client on a fresh machine, download your repository, build and that's it (in theory, at least :) ).
Sometimes it is difficult to obtain an old version of some library. Keeping them under your source control makes sure you won't have such problems.
However, you don't need to add 3rd party source code to your repository if you don't plan to change it. I tend to just add binaries, but I make sure only these libraries are referenced in our code (and not the ones from the Windows GAC, for example).
We do, because we want to test an updated version of the vendor branch before we integrate it with our code. We commit changes to it when testing new versions. We have the philosophy that everything you need to run the application should be in SVN, so that:
You can get new developers up and running
Everyone uses the same versions of various libraries
We can know exactly what code was current at a given point in time, including third party libraries.
No, it isn't a war crime to have third-party code in your repository, but I find that it upsets my sense of aesthetics. Many people here seem to be of the opinion that it's good to have your whole development team on the same version of these dependencies; I say it is a liability. You end up depending on a specific version of that dependency, where it is a lot harder to use a different version later. I prefer a heterogeneous development environment - it forces you to decouple your code from specific versions of dependencies.
IMHO the right place to keep the dependencies is on your tape backups, and in your escrow deposit, if you have one. If your specific project requires it (and projects are not all the same in this respect), then also keep a document under your version control system that links to these specific versions.
I like to check 3rd party binaries into a "lib" directory that contains any external dependencies. After all, you want to keep track of the specific versions of those libraries, right?
When I compile the binaries myself, I often check in a zipped-up copy of the code alongside the binaries. That makes it clear that the code is not there for compiling, manipulating, etc. I almost never need to go back and reference the zipped code, but a couple of times it has been helpful.
If I can get away with it, I keep them out of my version control and out of my file system. The best case of this is jQuery, where I'll use Google's AJAX Library and load it from there:
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js" type="text/javascript"></script>
My next choice would be to use something like Git submodules. And if neither of those suffices, they'll end up in version control, but at that point, it's only as up to date as you are...