I create tools for the Unity Asset Store, and one of my tools conflicts with other assets people have bought from the store, which results in unwanted errors. From talking this over with people, the advice was that I should just put "namespace TrollBridge {}" around EVERY script. Would this be a way of doing it, or do I only need to do it for certain scripts? Even data structure scripts? If it is just certain scripts, what exactly am I looking for in those scripts to decide whether to throw the "namespace TrollBridge {}" on them? I think I understand the whole encapsulation concept with this, but maybe I am missing something when it comes to selling tools to other people? Thanks in advance.
Would this be a way of doing it
Yes.
or do I only need to do certain scripts?
Do it for all your scripts.
Even data structure scripts?
Yes, even that. All your classes for this should be in a namespace.
To make this answer short: put all your scripts in a namespace. The reason for this is that you will be distributing this to thousands of people, or even hundreds of thousands.
Let's say that someone is using another plugin called Lighting and that plugin has a class called Lighting. Ask yourself what happens when you release your own plugin with a class called Lighting?
I have seen this happen before between two plugins, and it led to many complaints. The publisher had to add a namespace to all their scripts, which broke many old projects.
Do it right now so that you won't have this problem in the future. Pick a namespace name that you think does not already exist in the Asset Store and that reflects the function of your plugin.
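For illustration, wrapping one of your scripts looks something like this; the class and field names here are just placeholders, not anything from your actual asset:

```csharp
using UnityEngine;

namespace TrollBridge
{
    // Even a plain data-structure script goes inside the namespace, so its
    // type name can no longer collide with a same-named type from another asset.
    public class Lighting : MonoBehaviour
    {
        public float intensity = 1f;
    }
}
```

Users of the asset then add "using TrollBridge;" at the top of their own scripts, or refer to the type as TrollBridge.Lighting, so it never clashes with another plugin's Lighting class.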
I've been diving into some of the more advanced features of PowerShell modules and manifests recently, with a view to handling scenarios more advanced than just a basic export of a few functions. It sounds like it should be obvious, but I'm struggling to find a nice solution for sharing common 'helper' type functions across several large, non-trivial modules. In particular, I'm looking for a solution that:
Allows sharing of 'helper' type functions without necessarily being exported by anyone
Allow installation via PsGet from a local repo path
Let me go into some of the challenges I see.
First of all, as far as I can tell, PsGet does not handle module dependencies well. This implies sharing between modules is going to be a struggle. Maybe a solution to this is to avoid PsGet, and use a custom script to 'install' modules to the local module path, which might be more tolerant of dependencies and load order.
My point about not using module exports to share helper functions also seems to be an issue. The reason for this requirement is that I want aliases, helpers, etc. for common internal actions (needed inside the useful functions) that are either useless or unsafe to expose. For example, a nice brief alias for getting the local script path (commonly used, noisier than it should be). Or I recently made a nice simple wrapper around PromptForChoice with fewer options. Maybe this whole thing isn't a real issue. But I can't help but feel that shipping a 'utils' module that exports low-level functions that are useful inside real modules, but not to an end user, seems like the wrong way to go.
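To make the kind of helper I mean concrete, here is a rough sketch of that sort of PromptForChoice wrapper (the function name and the choices are invented for the example); it's handy inside a module but not something I'd want to export:

```powershell
# Hypothetical internal helper: a thin wrapper around PromptForChoice
# with fewer options, useful inside module functions but not worth exporting.
function Read-YesNo {
    param(
        [string]$Caption = 'Confirm',
        [string]$Message = 'Continue?'
    )
    $choices = @(
        New-Object System.Management.Automation.Host.ChoiceDescription '&Yes', 'Proceed'
        New-Object System.Management.Automation.Host.ChoiceDescription '&No',  'Abort'
    )
    # Returns $true when the user picks Yes (choice index 0); default is No.
    $Host.UI.PromptForChoice($Caption, $Message, $choices, 1) -eq 0
}
```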
What I've been playing with is a small build structure that tests and then packs modules, and I want to make some code sharing possible. I've been looking for an alternative using ScriptsToProcess in the manifest, but these entries seem to have to be absolute paths, not relative ones.
Imagine a folder structure:
modules
    utils
        console_helpers.ps1
    moduleA
        moduleA.psm1
        moduleA.psd1
    moduleB
        moduleB.psm1
        moduleB.psd1
packed_modules
    moduleA.zip
    moduleB.zip
What I was considering was that you could list relative paths in each ScriptsToProcess, and then my pack phase would go and pull those relative paths into each zip.
Is this a horrible crazy idea? Am I right that ps modules and PsGet really don't have decent dependency support? I would love to hear feedback from anyone who has looked into this space. I think the answer I'm hoping to get in rough priority might be:
Here's an example of sharing code without exposing it (probably a build/pack level solution)
Here's how to make module dependencies work nicely, using PsGet
Here's how to make module dependencies work nicely, but you can't use PsGet
Just expose everything from modules
This is a terrible idea and you're terrible
Thanks!
UPDATE as suggested by CalebB
Here's another example to illustrate what I'm trying to resolve. I find it useful to wrap up '&'-style execution of commands with a wrapper function, to deal with stuff like checking exit codes. If I'm building half a dozen modules, many of them will want to make use of that helper (obviously).
My options today seem to be to put it in a module and export it, but maybe I don't want it exported; I want more of a dot-source-style access. And if I've got a family of modules all trying to use this stuff, the options for module dependency management are limited (the PsGet limitation, etc.).
If I'm 'building' all the modules at once (with some decent psake and pester infrastructure), maybe I can use a hack at this point to embed scripts into my zipped modules to 'solve' all these problems?
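For reference, a minimal sketch of the kind of '&' wrapper I mean (the name and the error handling are just illustrative):

```powershell
# Hypothetical helper: run an external command via '&' and fail loudly
# on a non-zero exit code.
function Invoke-External {
    param(
        [Parameter(Mandatory)][string]$Command,
        [string[]]$Arguments = @()
    )
    & $Command @Arguments
    if ($LASTEXITCODE -ne 0) {
        throw "Command '$Command' exited with code $LASTEXITCODE."
    }
}

# Usage: Invoke-External git @('status')
```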
Allows sharing of 'helper' type functions without necessarily being exported by anyone
Mhm... what is wrong with dot-sourcing the scripts you need within a particular module? (A small sketch of that option follows this list.) You could:
Keep your folder structure and symlink the desired functions into module folder.
Try to use an absolute path in ScriptsToProcess that has a "relative part" in it, for example $PSScriptRoot\..\utils (not tried in that context, but this generally works). If not, you can always add a preprocessing step to fix the paths for you.
Delete undesired imported elements manually via the function:, alias:, and variable: providers.
Import extra utilities only when you use them, then remove them at the end? If the desire is that the user can't see them, you can encrypt them.
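As a rough sketch of the dot-sourcing option, assuming the folder layout from the question (the public function and the helper it calls are invented), moduleA.psm1 could look like this:

```powershell
# moduleA.psm1 -- dot-source the shared helpers relative to this module file,
# so they are available inside the module but stay private to its scope.
. (Join-Path $PSScriptRoot '..\utils\console_helpers.ps1')

function Get-ModuleAThing {
    # Helpers defined in console_helpers.ps1 (hypothetical Write-StatusLine here)
    # can be called freely, but are never exported to the user.
    Write-StatusLine "Doing the module A thing"
}

# Only the public surface is exported.
Export-ModuleMember -Function Get-ModuleAThing
```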
Here's how to make module dependencies work nicely, but you can't use PsGet
Chocolatey uses NuGet so it handles dependencies and can load from the local store. As a benefit, OneGet supports it which is something everybody will use eventually.
I've posted the solution I've come up with on github. I've rolled in a few other features I want when building modules, but the key part of the solution for this question involves reading and updating the psd1 of each module.
You include scripts that you want to embed in the NestedModules property of your manifest. My build phase will find each script and copy it into the module folder for packing and zipping. The manifest that ships in the package has the script paths converted to the now local file name.
I'm still not sure if this is ideal, but it seems to be a nice compromise to deal with the issues here.
A key issue I encountered along the way was that the ScriptsToProcess list is executed literally at module import time, so it is only useful for bootstrapping the import of your functionality. The NestedModules property is actually the list of additional scripts you want to be dot-sourced and available when your module is used.
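To illustrate the approach (module and function names are placeholders), the source manifest lists the shared script by relative path, and the pack phase copies the script into the module folder and rewrites the entry to the now-local file name:

```powershell
# moduleA.psd1 (source tree) -- NestedModules points at the shared script by a
# relative path; the build copies console_helpers.ps1 into the module folder
# and rewrites this entry to just 'console_helpers.ps1' before zipping.
@{
    RootModule        = 'moduleA.psm1'
    ModuleVersion     = '1.0.0'
    NestedModules     = @('..\utils\console_helpers.ps1')
    FunctionsToExport = @('Get-ModuleAThing')
}
```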
I'm developing a web app.
If I include a jQuery plugin (or the jQuery file itself), this has to be put under my static directory, which is under SCM, to be served correctly.
Should I gitignore it, or add it, even if I don't plan on modifying anything from it?
And what about binary files (graphic resources) that might come with it?
Thanks in advance for any advice!
My view is that everything you need for your application to run correctly needs to be managed. This includes third-party code.
If you don't put it under SCM, how is it going to get deployed correctly on your production systems? If you have other ways of ensuring that, that's fine, but otherwise you run the risk that successful deployment is a matter of people remembering to do all the right things, rather than some automated low-risk "push the button" procedure.
If you don't manage it under SCM or something similar, how do you ensure that the versions you develop against and test against are the same? And that they're the same as production? Debugging an issue caused by a version difference you don't notice can be horrible.
I generally add external resources to my project directly. Doing so facilitates deployment and ensures that if someone changes the version of this file in your project, you have a clear audit history of what happened in case it causes issues in the code that you've written. Developers should know not to modify these external resources.
You could use something like git submodules, I suppose, but I haven't felt that this is worth the hassle in the past.
Binary files from external sources can be checked in to the project as well, although if they're extremely large you may want to consider a different approach.
There aren't a lot of reasons not to put external resources like jQuery into your repo, and there are good reasons to include them:
If you pull it down from jQuery every time you check out or deploy, you have less control over which version you're using. This holds true for most third-party libraries; you probably don't want to upgrade your libraries without testing with your code to see if it breaks something.
You'll always have a complete copy of your site when you check out your repository and you won't need to go seeking resources that may have become unavailable.
For small (in terms of filesize) things like jQuery and images, I'd just add them unless you're really, really concerned about space.
It depends.
These arguments relate to having a copy of the library on your system and not pulling it from its original location.
Arguments in favour:
It will ensure that everything needed for your project can be found in one place when someone else joins your development team. I've lost count of the number of times I've had to scramble around looking for the right versions of libraries in order to be able to get something working.
If you make any modifications to the library you can make these changes to the source controlled version so when a new version comes out you use the source control's merging tools to ensure your edits don't go missing.
Arguments against:
It could mean everyone has a copy of the library locally - unless you map the 3rd party tools to a central server.
Deploying could be problematic - again, unless you map the 3rd party tools to a central server and don't include them in the deploy script.
If I want to enable a new piece of functionality for a subset of known users first, is there any automated system or framework that exists to do this?
Perhaps not directly with version control - you might be interested to read how flickr goes about selectively deploying functionality: http://code.flickr.com/blog/page/2/
And this guy talks about implementing something similar in a rails app: http://www.alandelevie.com/2010/05/19/feature-flippers-with-rails/
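At its simplest, a "feature flipper" is just a per-user flag check. Here is a hedged sketch in C# (the class, user list, and method names are all invented for the example):

```csharp
using System.Collections.Generic;

// Minimal feature-flag sketch: the new functionality is only shown to a
// known subset of users; everyone else keeps the old behaviour.
public static class FeatureFlags
{
    private static readonly HashSet<string> EarlyAccessUsers = new HashSet<string>
    {
        "alice@example.com",
        "bob@example.com"
    };

    public static bool IsNewCheckoutEnabled(string userEmail) =>
        EarlyAccessUsers.Contains(userEmail);
}

// At the call site:
// if (FeatureFlags.IsNewCheckoutEnabled(currentUser.Email)) ShowNewCheckout();
// else ShowOldCheckout();
```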
Most programming languages have if statements.
I don't know what "switching between them at runtime" means. You usually don't check executable code into an SCM system. There's a separate process to check out, build, package, and deploy. That's the province of continuous integration and automated builds in agile techniques.
SCM systems like Subversion allow you to have tags and branches for parallel development. You're always free to build, package, and deploy those as you see fit.
As far as I know no...
If you want a revision control system that has multiple versions you can switch between, find an SCM you like and look up branching.
But it sounds like you want to be able to switch versions in the SCM programmatically during runtime. The problem with that is, for a revision control system to be able to do that, it would have to be aware of the language and how it's implemented.
It would have to know how to load and run the next version. For example, if it was C code it would have to dynamically compile and run it on the fly. If it was PHP it would have to magically load the script in a sandboxed HTTP server that has PHP support. Etc... In which case, it isn't possible.
You can write an app to change the version in the scm by using the command line.
To do it during runtime, that functionality has to be part of the application itself.
The best (only) way I can think of doing it is to have one common piece of code that acts like a 'bootloader', which uses a system call to checkout the correct branch based on whatever your requirements are. It then (if necessary) compiles that code, and runs it.
It's not technically 'at runtime', but it appears that way if it works.
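A rough sketch of that bootloader idea (the repository path, branch names, and the .NET build commands are all assumptions about your setup):

```powershell
# Hypothetical bootloader: check out the requested branch, rebuild, then run.
param([string]$Branch = 'stable')

Set-Location 'C:\app\working-copy'      # assumed location of the working copy
git checkout $Branch                    # switch the code to the chosen version
git pull                                # make sure it is up to date
dotnet build -c Release                 # recompile that version
dotnet run --project .\App -c Release   # ...and launch it
```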
Your first other option is something that dynamically loads code, but that's very language-dependent, and you'd need to specify.
The other is to permanently have both in the working codebase (which doubles your size if it's a full duplication), and switch at runtime. You can save a good bit of space by using objects that are shared between both branches, and things like conditional compilation to use the same source files for both targets.
I do a lot of bug fixing and implementing new features for several different customers. These customers all report their bugs, change requests and new feature requests into our Trac system.
Sometimes these requests result in me creating some SQL change scripts, sometimes there are Excel documents or Access databases with test data, Word documents from the customer and so on. A lot of files are used to fix one ticket and can then be deleted when the ticket is closed.
I usually do this by creating folders in the filesystem like this: /customerXX/TicketNNNNN and then just dumping everything in there.
How do you organize your workfiles? Have you found some fantastic tool to do this?
I would say for scripts or files that are related to a particular ticket, the best thing to do would be to attach the file to that ticket in your issue tracking software - almost all issue trackers that I've worked with will allow you to do this. That way, you can look back and a) see exactly what you did in case something goes wrong, or b) do exactly the same thing if the issue comes up again later. That's almost certainly the best place to keep files with extra info from the customer, too (or at least the first place most people will look).
For frequently re-used scripts that aren't specific to a particular ticket, I would create a scripts/ or bin/ directory in the associated project, and keep them in there.
I also have a small handful of useful files that I keep in src/misc/ off my home directory, with things like SQL queries to get readable "explain" output out of Oracle and such, that aren't specific to any particular project. The number of these is small enough that subdirectories aren't necessary, though - I suspect if you ended up with a large number of these files, many of them could/should be moved to specific projects or your issue tracking system.
JIRA has been quite helpful for this at my site. It supports issue tracking and file attachments, and you can easily customize and categorize your projects and issues.
I use Fogbugz and I add all files to the case. I believe that no matter what application you use, the important thing is to keep these files for future reference. If your bug-tracking tool does not let you attach files, then add the files to version control.
We use CaWeb4 and find it very easy to use for our bug tracking.
I am working in a team of five. We are working on a C# application with five csprojects.
The problem is that for each csproject, each of my colleagues has their own ideas on how to reference a DLL; some would like to link it in by project reference, others would like to link in the DLL only. So each and every one of us ends up with our own version of each csproject.
I want all of them to check in their csprojects; but given that every copy of a csproject is different, there isn't really a feasible mechanism to do that, is there? But if I don't ask them to check in their csprojects, then every time they add a new file I have to manually edit my csproject, and that's very tedious, not to mention that it defeats the purpose of continuous integration.
Is there any strategy to handle this? I know it would be best to enforce a standard, but is there any other option leaving this aside?
There is a reason why the csproject content is different for everyone: not everyone has all five csprojects, and not everyone is allowed to have all five. So invariably some people end up having to reference DLLs instead of projects, while others want to reference projects for ease of debugging. If I were to enforce a standard, as the answers here suggest, I would have to solve this issue.
As to why we need to split into multiple csprojects: we want to reuse some parts of the code for other applications, and not everyone can have access to all of the source code. It's more political than technological.
Your problem is not how to handle it with Source Control.
Your problem is that you (or management) needs to get your team to adopt a set of standards the entire team follows.
If you let everyone follow their own mish-mash of ideas and do not get team cohesion on the basics it will only end in tears...
You're almost certainly solving the wrong problem. If you fork the .csproj files to cater to individual preferences, you are incurring additional work and introducing the likelihood of errors, for exactly the reason you describe -- every time Alice adds a file to AlicesX.csproj, Bob has to learn about this and add the same file to BobsX.csproj.
You really need to consider this as a problem of standards and team dynamics: agree on how DLLs will be referenced in the master sources, and require everyone to stick to that. If the "losing" side don't like to work that way, sure, they can use their preferred style in their private working copies. But you really only want one master source, and you want to work towards getting everybody to buy into the way the master source does it.
Per your edit: If you really, really cannot come to an agreement with your colleagues, then I would still suggest a single master, but write a little utility that the dissenters can use that converts project references to DLL references (or vice versa). .csproj files are just XML so this is pretty trivial to do. If you cannot even agree on what is going to be the repository format, then you will need to maintain parallel .csproj files, but I'd still write the utility to ensure that changes made to DllReferencingProj.csproj get copied to ProjectReferencingProj.csproj. But I still say you're just making more work and storing up more pain for yourself than if you had the squabble and got it over with: in order to function as a team, you're going to need to find some way of resolving disputes, and this is as good a test case as any.
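As a sketch of such a utility (the shared bin folder and the assumption that assembly names match project file names are mine, not from the question), something like this converts every ProjectReference in a .csproj into a plain Reference with a HintPath; the reverse direction is similar:

```powershell
# Hypothetical converter: swap <ProjectReference> items for <Reference> items
# pointing at already-built DLLs in an assumed shared bin folder.
param(
    [Parameter(Mandatory)][string]$CsprojPath,
    [string]$DllFolder = '..\bin'
)

[xml]$proj = Get-Content $CsprojPath
$ns = $proj.Project.NamespaceURI   # MSBuild XML namespace

# Snapshot the node list before modifying the document.
foreach ($projRef in @($proj.GetElementsByTagName('ProjectReference'))) {
    # Assume the assembly name matches the referenced project file name.
    $name = [IO.Path]::GetFileNameWithoutExtension($projRef.GetAttribute('Include'))

    $dllRef = $proj.CreateElement('Reference', $ns)
    $dllRef.SetAttribute('Include', $name)
    $hint = $proj.CreateElement('HintPath', $ns)
    $hint.InnerText = Join-Path $DllFolder "$name.dll"
    [void]$dllRef.AppendChild($hint)

    # Replace the project reference with the DLL reference in place.
    [void]$projRef.ParentNode.ReplaceChild($dllRef, $projRef)
}

$proj.Save((Resolve-Path $CsprojPath).Path)
```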
Time to make everyone grow up and follow a standard. If you're all working on the same code you should decide together whether referencing the dll or the project is best and then stick to it. Once you guys figure this one out you can decide whether to indent 2 or 4 spaces or a tab. Then decide whether to put your curly braces on the same line as or the next line after your function declarations. I'm not even going to speak to the vagaries of Hungarian notation...
Our configuration is as follows:
Project -> copy dll to common folder
Project -> copy dll to common folder
Main Project -> Copy exe to common folder, run application from common folder
Doesn't much matter how you reference using this configuration, the dlls will be picked up from the application folder and you're golden.
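For example, the "copy dll to common folder" step can be a post-build event in each library project, along these lines (the Common folder name is an assumption about the layout):

```xml
<!-- In each library's .csproj: after every build, copy the output DLL
     into a shared folder next to the solution file. -->
<PropertyGroup>
  <PostBuildEvent>xcopy /y "$(TargetPath)" "$(SolutionDir)Common\"</PostBuildEvent>
</PropertyGroup>
```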
Continuous integration shouldn't care about your .csproj files. I guess they're MSBUILD files? Or something?
Don't use them for CI. They're junk. They accrue garbage because they make too many things invisible. Create a clean build structure that is independent of them, you'll be thankful you did. And then only check in a project file when you're adding something, and everyone else can update/merge. You don't need to have the same or even similar project files most of the time. On my team we don't even run the same version of VS across all workstations.