How to include custom library files in a Unity build

When I run my Unity app in the editor, it can read my .dlls and the other custom files those .dlls need, and everything works fine. However, when I make a build, only the .dll files in the Plugins folder are included in the build, not the other custom files. Is there a way to force Unity to include the other files as well? I have tried putting them in both the Plugins and Resources folders before building, and in both cases only the .dlls are kept.

The custom files are .obf, but I don't think that's relevant
It is extremely relevant. Unity does not support all types of libraries.
Here are the supported library extensions:
For Windows, you can use .dll.
For Linux, .so is supported.
For Android, you can only use .aar, .jar, and .so.
For iOS, .a is used, but you can also use uncompiled source files such as .m, .mm, .c, and .cpp.
There is no support for .obf. If you want to add it to your project so that you can load and execute it, then you are out of luck.
If you just want to make Unity include it in your final build so that you can read it, then you can. This doesn't mean you can load and execute it.
First, rename the extension from ".obf" to ".bytes". Place the file in a Resources folder and read it as a TextAsset with the Resources.Load function. You can then access the data with TextAsset.bytes.
// Load "YourObfName.bytes" from a Resources folder as a TextAsset
TextAsset dataAsset = (TextAsset)Resources.Load("YourObfName", typeof(TextAsset));
byte[] data = dataAsset.bytes;
Not sure how helpful just reading it is, but this shows how to include the file in the build and read it. You still can't execute it unless there is a C# API to do so, and I don't think any such API exists.
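For completeness, here is the same idea wrapped in a minimal MonoBehaviour sketch ("YourObfName" is a placeholder; the file YourObfName.bytes is assumed to sit under Assets/Resources):

using UnityEngine;

public class ObfLoader : MonoBehaviour
{
    void Start()
    {
        // Load the renamed payload via the generic Resources.Load overload.
        TextAsset dataAsset = Resources.Load<TextAsset>("YourObfName");
        if (dataAsset == null)
        {
            Debug.LogError("YourObfName.bytes not found in a Resources folder");
            return;
        }
        // Raw bytes of the original .obf file, ready to inspect or copy to disk.
        byte[] data = dataAsset.bytes;
        Debug.Log("Loaded " + data.Length + " bytes");
    }
}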

Related

How to generate the "Li2018" plugin in Halide on Windows and use "load_plugin" in another project?

Recently, I wanted to test how the autoscheduler "Li2018" works on GPU. First, I need to load this autoscheduler's plugin into my project with load_plugin("gradient_autoscheduler"), like the example: https://github.com/halide/Halide/blob/master/apps/gradient_autoscheduler/test.cpp The biggest problem is that I cannot generate the plugin on Windows. I have tried to add "generate_autoscheduler" to the CMakeLists.txt in the /apps folder, but it did not work. Compared with the autoscheduler "Adams2019", which lives in /apps/autoscheduler, "Li2018" needs its own CMakeLists.txt to generate the DLL plugin. Does someone know how to generate the "Li2018" plugin on Windows? Thanks in advance!
As you have noticed, we don't have a CMake configuration for Li's autoscheduler, so Windows is not supported yet. I'll put this in my TODO list, but contributions are always welcome. It shouldn't be hard to come up with a CMakeLists.txt based on the Makefile content.

Deploy files to the LocalState folder during installation of a Store app

I am building an app for the Windows Store and I need some default and example data to be in the LocalState folder (Windows.Storage.ApplicationData.current.localFolder) when the app runs for the first time.
The folder and file structure is a bit complex, and I tried to copy the files at the start of the application, but I couldn't manage it that way.
Is it possible to have files copied automatically from the installation folder to the LocalState folder during the Store app's installation?
Unfortunately, customization of the app install process isn't currently supported. You have to do this as part of your first run processing.
One possibility is that you include the data in your package as a .ZIP or other compressed file and use an appropriate library to expand that file into a folder structure on startup. That could simplify your logic considerably. (I don't have a library to recommend; it's just an idea.)
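If you go the first-run route, a rough sketch looks like this (the "DefaultData" folder name and the "DefaultDataCopied" settings key are made-up names, and error handling is omitted):

using System.Threading.Tasks;
using Windows.Storage;

static class FirstRunSetup
{
    // Copies the packaged DefaultData folder into LocalState on first run only.
    public static async Task EnsureDefaultDataAsync()
    {
        var settings = ApplicationData.Current.LocalSettings;
        if (settings.Values.ContainsKey("DefaultDataCopied"))
            return; // already seeded on a previous run

        StorageFolder source = await Windows.ApplicationModel.Package.Current
            .InstalledLocation.GetFolderAsync("DefaultData");
        await CopyFolderAsync(source, ApplicationData.Current.LocalFolder);
        settings.Values["DefaultDataCopied"] = true;
    }

    // Recursively copies one StorageFolder tree into another.
    static async Task CopyFolderAsync(StorageFolder source, StorageFolder target)
    {
        foreach (StorageFile file in await source.GetFilesAsync())
            await file.CopyAsync(target, file.Name, NameCollisionOption.ReplaceExisting);
        foreach (StorageFolder sub in await source.GetFoldersAsync())
        {
            StorageFolder targetSub = await target.CreateFolderAsync(
                sub.Name, CreationCollisionOption.OpenIfExists);
            await CopyFolderAsync(sub, targetSub);
        }
    }
}

Call it early in your app's startup (e.g. from OnLaunched) before anything reads LocalState.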

Developing with Qooxdoo and multiple developers

I'm interested in Qooxdoo as a possible web development framework. I have downloaded the SDK and installed it in a central location on my PC as I expect to use it on multiple projects. I used the create-application.py script to make a new test application and added all the generated files to my version control system.
I would like to be able to collaborate on this with other developers on other PCs. They are likely to have the SDK installed in a different location. The auto-generated files in Qooxdoo seem to include the SDK path in both config.json and generator.py: if the SDK path moves, the generator.py script stops working. generator.py doesn't seem to be too much of a problem as it looks in config.json for an updated path, but I'm not sure how best to handle config.json.
The only options I've thought of so far are:
1. Exclude it from the VCS, but there doesn't seem to be a script to regenerate it automatically, so this could be dangerous.
2. Add it to the VCS but have each developer modify the path line and accept that it might need to be adjusted whenever changes are merged.
3. Change config.json to be a path and a single 'include' line that points to a second file that contains all the non-SDK-path related information.
4. Use a relative path to the SDK and keep a separate, closely located copy of the SDK for every project that uses it.
Approach 1 would be ideal if the generation script existed; approach 2 is really nasty; I couldn't get approach 3 to work and approach 4 is a bit messy as it means multiple copies of the SDK littered about the place.
The Android SDK seems to deal with this very well (using approach 1), with the SDK path in its own file with a script that automatically generates that file. As far as I can tell, Qooxdoo puts lots of other important information in config.json and the only way to automatically generate that file is to create a new project.
Is there a better/recommended way to deal with this?
As an alternative to using symlinks, you can override the QOOXDOO_PATH macro on the command line:
./generate.py source -m QOOXDOO_PATH:<local_path_to_qooxdoo>
(Depending on the shell you are using, you might have to quote the -m argument properly.) This way, every programmer can use their locally installed qooxdoo SDK. You can even drop the QOOXDOO_PATH entry from config.json to enforce this.
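For reference, the generated config.json keeps the SDK path in its "let" section, roughly like this (the application name and relative path are just examples):

{
  "let" :
  {
    "APPLICATION" : "myapp",
    "QOOXDOO_PATH" : "../qooxdoo-sdk"
  }
}

This is the entry the -m QOOXDOO_PATH:... switch overrides, and the one you can delete to force every developer to supply their own path.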
We work with a symbolic link pointing to the SDK ... config.json contains just the path of the link.

How to setup a DotNetNuke Development Environment with Source Control?

My team is developing a new DotNetNuke web application and would like to know the recommended way to set up a development environment with source control and automated builds. We would like to keep the DNN source code separate from the source code of our custom modules and extensions.
The DotNetNuke Compiled Module template for Visual Studio wants us to store the source code in the DesktopModules directory of the DNN source code and output to the DNN source code bin directory. Is this the recommended structure? I would rather keep the files in different locations, but then it becomes more difficult to run and debug locally as it would require an install of the module for each change. Also, how should an automated build deploy any changes?
How have others set this up? Is there a recommended best practice?
For my source control, I develop modules in their own project. This contains the module code, test code, data provider code (if applicable) and anything else. This is checked into source control like any other project. Note that the module project contains no links to a specific DNN website; DNN references are made to a common "bin" directory that holds your target build. For example, in my projects folder I have \bin460, \bin480, \bin510, \bin520 etc. Each of these folders holds a set of binaries for a specific DNN version. That way you can build against a particular version but test against any version you like.
The problems with source-controlling a module in place in a DNN install are:
- sometimes not all of the module code is easily isolated under a single parent directory
- doesn't lend itself well to a PA (private assembly) module approach
- not easy to shift the project to a different DNN Version for development or testing
- easy to inadvertently source control parts of the DNN solution, particularly with integrated VS source control solutions.
This approach compiles quickly because you're not trying to compile the entire project. For test deployment I have a build script that copies the various parts of the module into a target website. This can be done via the compile (link the build script) or just run after you've had a successful compile in a cmd window. My build script has a 'target' environment switch, so that I can say 'dnn520' to deploy the build to my test dnn520 install. Note that you need to manually create the module configuration first before this will work, but this is a one-time effort, and you can use the export feature to create your .dnn module manifest.
To build your module package, invest the time in a comprehensive script which will take the various parts from your source directory, and zip them into an install package. Keep all of the parts in your source control folder, and copy them into a temp directory, then run a command-line zip utility (I use an ancient version of pkzip) to pack it into an installable file.
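For illustration, here is a minimal C# packaging sketch using .NET's built-in System.IO.Compression in place of pkzip (module name, version and source layout are hypothetical):

using System.IO;
using System.IO.Compression; // reference System.IO.Compression.FileSystem

class PackageModule
{
    static void Main()
    {
        // Stage the parts of the module that belong in the install package.
        string staging = Path.Combine(Path.GetTempPath(), "MyModule_staging");
        if (Directory.Exists(staging)) Directory.Delete(staging, true);
        Directory.CreateDirectory(staging);

        File.Copy(@"src\MyModule.dnn", Path.Combine(staging, "MyModule.dnn"));
        File.Copy(@"bin\Release\MyModule.dll", Path.Combine(staging, "MyModule.dll"));
        foreach (string ascx in Directory.GetFiles(@"src", "*.ascx"))
            File.Copy(ascx, Path.Combine(staging, Path.GetFileName(ascx)));

        // Zip the staging folder into an installable package.
        Directory.CreateDirectory("dist");
        string package = @"dist\MyModule_01.00.00_Install.zip";
        if (File.Exists(package)) File.Delete(package);
        ZipFile.CreateFromDirectory(staging, package);
    }
}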
The benefits of this approach are:
- separation of module code from installed code
- simple way of keeping only the module code in source control (don't have to exclude all the website code)
- ability to quickly test out modules in different DNN versions
- packaging script allows you to quickly and easily build a new version of a module for install testing/deployment
The drawbacks are:
- can't use the magic green 'go' button in VS (have to manually attach debugger)
- more setup time than developing in-place
We typically stick to keeping the module code in a folder under DesktopModules and building to the website's bin directory.
In source control, we just map the individual modules, rather than the entire website. Depending on what we're working on, a module may be an entire project in source control, or we may have multiple related modules in the same project, living next to each other.
Automatically deploying changes is somewhat difficult in DNN. It's highly recommended to have a build script that packages your module into an installable form. You can then copy installable packages into the website's Install/Module folder and request the URL /Install/Install.aspx?mode=InstallResources, which will install any packages in that folder.
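As a sketch, that deployment step can be scripted too (the paths, site URL and package name here are assumptions, not standard values):

using System.IO;
using System.Net;

class DeployPackage
{
    static void Main()
    {
        string package = @"dist\MyModule_01.00.00_Install.zip";
        string installFolder = @"C:\inetpub\wwwroot\dnn520\Install\Module";

        // Drop the package where DNN looks for pending installs.
        File.Copy(package, Path.Combine(installFolder, Path.GetFileName(package)), true);

        // Requesting this URL tells DNN to install everything in Install/Module.
        using (var client = new WebClient())
            client.DownloadString("http://dnn520.local/Install/Install.aspx?mode=InstallResources");
    }
}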
In response to bduke's answer: you shouldn't, and don't want to, build projects in the DesktopModules folder.
- That's where all of the out-of-the-box source code for the site goes.
- That's where your modules will be "installed", so if someone updates or re-installs one, your work will be overwritten.
- It can make upgrading your application far more difficult. Many developers don't understand the idea of not touching the original source files to modify their behavior, because those changes will just be overwritten when you perform an upgrade.
If you want to build modules, create a solution folder called Modules and place your separate module projects there.
If you want to debug them, make the target debug output point to the web\bin folder.
If you want to install/deploy them, build in release mode and install them through the Module/Extension filter.

Adding a static library to an iPhone project

The motivation for this question is me trying to get LDAP functions to work with an iPhone application which is a project I'm attempting for part of my dissertation.
When I was developing the application I used the ldap.framework framework that is part of Mac OS X. This works fine in the simulator, but when I now try to get the app onto a device, it tells me that I'm not allowed to use this framework.
After some research I found that I could build openldap using the arm architecture and add the static library to my application destined for my device.
I eventually managed to configure and build openldap by setting variables as mentioned here and using the following commands...
Ade$ ./configure CC=$DEVROOT/usr/bin/arm-apple-darwin9-gcc-4.0.1 \
LD=$DEVROOT/usr/bin/ld --host=arm-apple-darwin --with-yielding_select=yes
Ade$ make depend
Ade$ make
I was told that the file I'm looking for will have an extension of '.a' so I searched for a '.a' file that mentions ldap...
Ade$ sudo find / -name *ldap*.a
Password:
/Users/Ade/Desktop/openldap-2.4.16/libraries/libldap/.libs/libldap.a
/Users/Ade/Desktop/openldap-2.4.16/libraries/libldap_r/.libs/libldap_r.a
So I assume these are the files I need?
My question is what do I do next? I know I need to add the library to the Xcode project and probably add a load of '.h' files too?
If anyone can give me a pointer to documentation or shed any light on the next stage I would be really grateful.
Many thanks,
Ade
ps. I have also talked about this process on my blog at www.greenpasta.com.
I've done this same thing to build an LDAP client for iPhoneOS 2.2. You just need to drag the .a into the "link with libraries" build stage. I recommend using the regular (non-_r) version of the library, unless you specifically need reentrancy in your ldap calls (which I don't recommend). You can also add the .h files to your project, which is generally the easiest way to access them.
Simply drag the .a files into the Xcode project and choose "copy files into project". I'm not familiar with OpenLDAP, but I think the _r version is just a thread-safe version. I would recommend using that and not copying the other. You should probably not copy both files into Xcode or you will get link errors.
Then do the same for the .h files that define the client APIs of OpenLDAP - again I'm not sure which these are but I'm sure you can find out easily.
I would advise organising the .a and .h files together in an Xcode group under Resources.
Include the header files in your source and you should be good to go.
You may also need to add -lldap to your linker command (in the build settings pane).