What is the best way to split large amounts of content between several linguists so they can work simultaneously?

An LSP wants to use Lokalise for a specific client and is asking how they can split work between translators (with the same target language) and be able to work at the same time.
I have not tested this, but my thought was to create a task for each individual translator on a specific set of keys, to distribute the work.

Related

How are searches implemented in a Flat File CMS

Flat-file CMSs don't use databases, so how are searches implemented? Is searching more or less computationally expensive with this type of setup compared to a database-powered search?
The problem with a static site and search together is that one is by definition static, while the other is highly dynamic. So out of the box there is no simple way to make the two live happily together.
Flat-file CMSs aren't static websites. While parsing files is usually more costly than querying a database, search functionality can easily be provided by the underlying CMS. Look for plugins that provide what you want.
However, there are some non-trivial solutions that can achieve what you want, depending on your infrastructure, your data volume, and whether your site can perform server-side computation (Grav can; Gatsby and Hugo can't).
The simplest way to do it is to create an index of all your content in a special file, then load that and do the search client-side. You can even use a ready-made package to speed up development time on this option (for example: https://www.npmjs.com/package/react-fuzzy-search).
The pro is that it's quite trivial to do. The cons are that the index will get quite big for a large site, and all the search is done client-side (so, possibly a long wait for the user if the index is large enough). This solution will also NOT scale well.
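For illustration, here is a minimal TypeScript sketch of that approach. It assumes a build step has already written the index to a hypothetical /index.json file; the record shape and the naive substring matching are assumptions (a fuzzy-search package would rank results better):

```typescript
// Minimal client-side search over a prebuilt index. The /index.json path
// and record shape are hypothetical.
interface IndexEntry {
  title: string;
  url: string;
  content: string;
}

async function search(query: string): Promise<IndexEntry[]> {
  const response = await fetch("/index.json");
  const index: IndexEntry[] = await response.json();
  const needle = query.toLowerCase();
  // Naive substring match; a fuzzy-search library would score and rank hits.
  return index.filter(
    (entry) =>
      entry.title.toLowerCase().includes(needle) ||
      entry.content.toLowerCase().includes(needle)
  );
}

search("static site").then((hits) => console.log(hits.map((h) => h.url)));
```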
Another way to do it is to use a search service (as SaaS or on your own premises) to externalize the search functionality. Basically, this service runs on a server, has a way to index your content (via an API) and a way to search it (via an API). Just make sure the search API is public so you can query it in real time from the client side.
This solution scales really well, because these sorts of services are built from the ground up to scale! However, the setup costs are really high, and not worth it if you don't plan to scale to millions of pages.
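As a rough sketch of what the client side of that looks like, here is a hypothetical query against such a service. The endpoint, query parameter, and response shape are all assumptions; real services (Algolia, Meilisearch, Elasticsearch, and so on) each have their own client APIs:

```typescript
// Hypothetical browser-side query against an external search service.
interface SearchHit {
  url: string;
  title: string;
  snippet: string;
}

async function remoteSearch(query: string): Promise<SearchHit[]> {
  // search.example.com and the "q" parameter are invented for illustration.
  const url = `https://search.example.com/api/search?q=${encodeURIComponent(query)}`;
  const response = await fetch(url);
  if (!response.ok) throw new Error(`Search failed: ${response.status}`);
  const body: { hits: SearchHit[] } = await response.json();
  return body.hits;
}
```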

How to maintain multiple components for multiple clients with multiple features?

Basically, my project is product-based.
We developed the product, signed up multiple clients, and deploy the application based on each client's needs.
We decided to package new features and project-dependent modules as components.
Now my application has a large number of customers.
Every customer needs different features based on the components.
But we have centralized components for all clients; we move each component's additional features into a client-specific folder and deploy.
My problem is that I am unable to maintain the component features for multiple clients.
My component feature code keeps growing, and I am unable to track which features belong to which client.
Is there any solution for maintaining multiple component features for multiple clients?
I've worked for a couple of companies in a similar space - product software but very heavily customised.
Essentially there is a decision the company needs to make: are you a product company (that is, you ship broadly the same thing to every client), or are you a bespoke company? At the moment it sounds like they're caught between two stools, wanting the economies of being a product company with the ability to meet specific client demands the way a bespoke software company can.
Assuming the company wants to be a product software company, unless there are specific technical reasons why you can't, you need to move to a single code base with the modifications for each customer being handled through customisable options (i.e. flags saying how this particular situation is being handled, whether this feature is available and so on).
These can be set at run time (so they can be changed as the client wants - think options in Word or Excel) or at build time (so code is included/excluded when you do the build), but the key thing is that every client is built from the same code base.
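A minimal TypeScript sketch of the run-time flavour of this idea follows; the client names and option keys are invented for illustration:

```typescript
// One code base, per-client option flags. Client names and options are
// hypothetical.
interface ClientOptions {
  invoicingEnabled: boolean;
  taxCalculation: "standard" | "regional";
  maxUsers: number;
}

const defaults: ClientOptions = {
  invoicingEnabled: true,
  taxCalculation: "standard",
  maxUsers: 50,
};

// Per-client overrides; anything not listed falls back to the defaults.
const clientOverrides: Record<string, Partial<ClientOptions>> = {
  acme: { taxCalculation: "regional" },
  globex: { invoicingEnabled: false, maxUsers: 500 },
};

function optionsFor(clientId: string): ClientOptions {
  return { ...defaults, ...clientOverrides[clientId] };
}

if (optionsFor("acme").taxCalculation === "regional") {
  // ...run the regional tax code path...
}
```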
But this needs to be agreed with the business as it limits what they can sell - every change they sell has to fit into an overall vision which can be accommodated by the single product.
The alternative is that you're essentially producing bespoke software for each client (that is coded specifically for what they want) but using many common libraries. That's fine and allows you to produce something which is exactly what they want but in the end it is going to be more work and the business needs to understand and cost for that.
We actually do a bit of both - there is a server product which is identical for all clients, and then web and mobile clients which are specific to them (in the case of mobile you can't have lots of dead code on the device - the web stuff is historic and will be moving to a standard product for all clients).
Good luck though, it's a difficult problem with no easy solution.
You are essentially talking about software product lines (SPLs): variations from a common base. Since you already package your features as components, you need a specialized tool to manage such variations.
You can then build a complete custom application based on a configuration that is unique to any given customer. Easier said than done, of course.
A model-driven software development (MDSD) approach can help a lot with this task. One such system that can support this development setup is ABSE, an emerging MDSD approach that, among other things, can implement a software product line (info at http://www.abse.info - disclaimer: I am the ABSE project lead). There is no product yet, though; an alpha preview is coming.
Again, I know some companies that, using an MDSD coupled with code generation, have achieved what I understand you want: products that are half pre-packaged, half custom.

How to manage multiple clients with slightly different business rules? [closed]

We have written a software package for a particular niche industry. This package has been pretty successful, to the extent that we have signed up several different clients in the industry, who use us as a hosted solution provider, and many others are knocking on our doors. If we achieve the kind of success that we're aiming for, we will have literally hundreds of clients, each with their own web site hosted on our servers.
Trouble is, each client comes in with their own little customizations and tweaks that they need for their own local circumstances and conditions, often (but not always) based on local state or even county legislation or bureaucracy. So while probably 90-95% of the system is the same across all clients, we're going to have to build and support these little customizations.
Moreover, the system is still very much a work in progress. There are enhancements and bug fixes happening continually on the core system that need to be applied across all clients.
We are writing code in .NET (ASP, C#), MS-SQL 2005 is our DB server, and we're using SourceGear Vault as our source control system. I have worked with branching in Vault before, and it's great if you only need to keep 2 or 3 branches synchronized - but we're looking at maintaining hundreds of branches, which is just unthinkable.
My question is: How do you recommend we manage all this?
I expect answers will be addressing things like object architecture, web server architecture, source control management, developer teams etc. I have a few ideas of my own, but I have no real experience in managing something like this, and I'd really appreciate hearing from people who have done this sort of thing before.
Thanks!
I would recommend against maintaining separate code branches per customer. Keeping working customer code in sync with your core quickly becomes a nightmare.
I do recommend that you implement the Strategy Pattern and cover your customer customizations with automated tests (e.g. unit and functional) whenever you change your core.
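For concreteness, here is a minimal Strategy Pattern sketch in TypeScript; the shipping-cost domain and customer keys are made up for illustration:

```typescript
// Strategy Pattern: the core algorithm never changes, a per-customer
// strategy is swapped in. Domain and customer names are hypothetical.
interface ShippingStrategy {
  cost(weightKg: number): number;
}

class StandardShipping implements ShippingStrategy {
  cost(weightKg: number): number {
    return 5 + weightKg * 1.2;
  }
}

class FlatRateShipping implements ShippingStrategy {
  cost(): number {
    return 10; // one customer negotiated a flat rate
  }
}

const strategies: Record<string, ShippingStrategy> = {
  default: new StandardShipping(),
  acme: new FlatRateShipping(),
};

function shippingCost(customer: string, weightKg: number): number {
  return (strategies[customer] ?? strategies["default"]).cost(weightKg);
}
```

Each customization then becomes one more strategy class plus its tests, rather than a divergent branch of the core.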
UPDATE:
I recommend that, before you get too many customers, you establish a system for creating and updating each of their websites. How involved you get will be balanced by your current revenue stream, of course, but you should have an end state in mind.
For example, when you have just signed up Customer X (hopefully all via the web), their website should be created within XX minutes and an email sent to the customer stating it's ready.
You definitely want to setup a Continuous Integration (CI) environment. TeamCity is a great tool, and free.
With this in place, you'll be able to check your updates in a staging environment and can then apply those patches across your production instances.
Bottom line: once you get beyond a handful of customers, you need to start thinking about automating your operations and treating your deployment as yet another application in itself.
UPDATE: This post highlights the negative effects of branching per customer.
Our software has very similar requirements and I've picked up a few things over the years.
First of all, such customizations will cost you in both the short and the long term. If you have control over it, put some checks and balances in place so that sales & marketing do not over-zealously sell customizations.
I agree with the other posters that say NOT to use source control to manage this. It should be built into the project architecture wherever possible. When I first began working for my current employer, source control was being used for this and it quickly became a nightmare.
We use a separate database for each client, mainly because for many of our clients, the law or the client themselves require it due to privacy concerns, etc...
I would say that the business logic differences have probably been the least difficult part of the experience for us (your mileage may vary depending on the nature of the customizations required). For us, most variations in business logic can be broken down into a set of configuration values, which we store in an XML file that is modified upon deployment (if machine-specific) or stored in a client-specific folder and kept in source control (explained below). The business logic obtains these values at runtime and adjusts its execution appropriately.

You can use this in concert with various strategy and factory patterns as well - config fields can contain the names of strategies, for example. Unit testing can then be used to verify that you haven't broken things for other clients when you make changes. Currently, adding most new clients to the system involves simply mixing and matching the appropriate config values (as far as business logic is concerned).
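As a rough illustration of that idea (not their actual code), here is a TypeScript sketch where each client's config file names the strategy to use and a small factory resolves it at runtime. The file layout, field names, and discount domain are invented, and JSON stands in for the XML mentioned above:

```typescript
// Config-driven strategy selection: the per-client config names the
// strategy, a factory looks it up. All names here are hypothetical.
import { readFileSync } from "fs";

type DiscountFn = (total: number) => number;

const discountStrategies: Record<string, DiscountFn> = {
  none: (total) => total,
  volume: (total) => (total > 1000 ? total * 0.9 : total),
};

interface ClientConfig {
  discountStrategy: string; // e.g. "volume"
}

function loadConfig(clientId: string): ClientConfig {
  // One config file per client, kept in source control as described above.
  return JSON.parse(readFileSync(`clients/${clientId}/config.json`, "utf8"));
}

function applyDiscount(clientId: string, total: number): number {
  const config = loadConfig(clientId);
  const strategy =
    discountStrategies[config.discountStrategy] ?? discountStrategies["none"];
  return strategy(total);
}
```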
More of a problem for us is managing the content of the site itself, including the pages, style sheets, text strings, and images, all of which our clients often want customized. The current approach I've taken is to create a folder tree for each client that mirrors the main site. This tree is rooted at a folder named "custom" located in the main site folder and deployed with the site. Content placed in the client-specific set of folders either overrides or merges with the default content (depending on file type). At runtime, the correct file is chosen based on the current context (user, language, etc.). The site can be made to serve multiple clients this way. Efficiency may also be a concern; you can use caching and similar techniques to make it faster (I use a custom VirtualPathProvider).

The largest problem we run into is the burden of visually testing all of these pages when we need to make changes. Basically, to be 100% sure you haven't broken something in a client's custom setup when you have changed a shared stylesheet, image, etc., you would have to visually inspect every single page after any significant design change. I've developed some "feel" over time as to which changes can be comfortably made without breaking things, but it's still not a foolproof system by any means.
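The lookup logic of that overlay scheme is simple; here is a hedged TypeScript sketch of the override resolution (paths are hypothetical, and the answer's real implementation used a custom ASP.NET VirtualPathProvider rather than direct filesystem checks):

```typescript
// "custom" folder overlay: prefer the client-specific file, fall back to
// the default site content. Folder names are hypothetical.
import { existsSync } from "fs";
import { join } from "path";

function resolveContentPath(clientId: string, relativePath: string): string {
  const custom = join("site", "custom", clientId, relativePath);
  return existsSync(custom) ? custom : join("site", relativePath);
}

// resolveContentPath("acme", "styles/main.css")
// -> "site/custom/acme/styles/main.css" if it exists,
//    otherwise "site/styles/main.css"
```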
In my case, I also have no control (other than offering my opinion) over which visual/code customizations are sold, so MANY more of them than I would like have been sold and implemented.
This is not something that you want to solve with source control management, but within the architecture of your application.
I would come up with some sort of plugin-like architecture. Which plugins to use for which website then becomes a configuration issue, not a source control issue.
This allows you to use branches etc. for what they are intended for: parallel development of code between (or maybe even across) releases. Each plugin becomes a separate project (or subproject) within your source code system. This also allows you to combine all plugins and your main application into one Visual Studio solution to help with dependency analysis, etc.
Loosely coupling the various components in your application is the best way to go.
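A small TypeScript sketch of plugin selection by configuration, to make the idea concrete; the App shape, plugin names, and routes are invented:

```typescript
// Plugins chosen by configuration rather than by branch. All names are
// hypothetical.
interface App {
  route(path: string, handler: () => string): void;
}

interface Plugin {
  name: string;
  register(app: App): void;
}

const availablePlugins: Record<string, Plugin> = {
  invoicing: {
    name: "invoicing",
    register(app) {
      app.route("/invoices", () => "invoice list");
    },
  },
  reporting: {
    name: "reporting",
    register(app) {
      app.route("/reports", () => "report list");
    },
  },
};

// Each website's configuration lists the plugins it gets; the code base
// is the same for every client.
function buildSite(app: App, pluginNames: string[]): void {
  for (const name of pluginNames) {
    availablePlugins[name]?.register(app);
  }
}
```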
As mentioned before, source control does not sound like a good solution for your problem. To me it sounds better to have a single code base using a multi-tenant architecture. This way you get a lot of benefits in terms of managing your application, load on the service, scalability, etc.
Our product uses this approach. What we have is some (a lot of) core functionality that is the same for all clients, plus custom modules that are used by one or more clients. At the core, the "customization" is a simple workflow engine that uses different workflows for different clients, so each client gets the core functionality, its own workflow(s), and an extended set of modules that are either client-specific or generalized for more than one client.
Here's something to get you started on multi-tenancy architecture:
Multi-Tenant Data Architecture
SaaS database tenancy patterns
Without more info, such as types of client specific customization, one can only guess how deep or superficial the changes are. Some simple/standard approaches to consider:
Keep a central config specifying the uniqueness from client to client.
Centralize the business rules in one class or group of classes.
Store the business rules in the database and pull them out based on the client.
Make the business rules entirely DB/SQL-based (each client having their own DB).
Overall, hard-coding differences based on client name/ID is very problematic, and keeping a different code base per client is costly (think of the complete testing/retesting time required for the 90% that doesn't change). I think more info is required to answer properly (give some specifics).
Layer the application. One of those layers contains the customizations, and it should be possible to pull that layer out at any time without affecting the rest of the system. Application- and DB-level "triggers" (quoted because they may or may not employ actual DB triggers) that call customer-specific code, or that are parametrized with customer keys, are very helpful.
The core should never be customized, but you must layer the customization in somewhere, even if it is simplistic web filtering.
What we have is a core database with the functionality that all clients get. Each client then has a separate database that contains the customizations for that client. This is expensive in terms of maintenance. The other problem is that when two clients ask for similar functionality, it is often done differently by the two separate teams. Little is currently done to share customizations between clients or to promote common ones into the core application. Each client has their own application portal, so we don't have to worry about a change to one client affecting another.
Right now we are looking at changing to a process using a rules engine, but there is some concern that the performance won't be there for the number of records we need to process. However, in your circumstances, this might be a viable alternative.
I've used some applications that offered the following customizations:
Web pages were configurable - we could drag fields out of view and position them where we wanted, with our own names for the field labels.
Add our own views or stored procedures and use them in: data grids (along with an update proc) and reports. Each client would need their own database.
Custom mapping of Excel files to import data into system.
Add our own calculated fields.
Ability to run custom scripts on forms during various events.
Identify our own custom fields.
If your clients are larger companies, you're almost certainly going to need your own SDK, APIs, etc.

Is there any form of Version Control for LSL?

Is there any form of version control for Linden Scripting Language?
I can't see it being worth putting all the effort into programming something in Second Life if, when a database goes down over there, I lose all of my hard work.
Unfortunately there is no source control in-world. I would agree with giggy. I am currently moving my projects over to a Subversion (SVN) system to get them under control. Really should have done this a while ago.
There are many free & paid SVN services available on the net.
Just two free examples:
http://www.sourceforge.net
http://code.google.com
You also have the option to set one up locally so you have more control over it.
Do a search on here for 'subversion' or 'svn' to learn more about how to set one up.
[edit 5/18/09]
You added in a comment that you want to back up entire objects. There are various programs to do that. One I came across in a quick Google search was Second Inventory.
I cannot recommend this or any other program as I have not used them. But that should give you a start.
[/edit]
-cb
You can use the Meerkat viewer to back up complete objects, or use some of the test programs from libopenmetaverse to back up in a text environment. I think you can back up scripts from the inventory with them.
Jon Brouchoud, an architect working in SL, developed an in-world collaborative versioning system called Wikitree. It's a visual SVN without the delta-differencing that occurs in typical source code control systems. He announced that it was being open sourced in http://archvirtual.com/2009/10/28/wiki-tree-goes-open-source/#.VQRqDeEyhzM
Check out the video in the blog post to see how it's used.
Can you save it to a file? If so then you can use just about anything, SVN, Git, VSS...
There is no good source control in-game. I keep meticulous version information in the names of my scripts, and I have a pile of old versions of things in folders.
I keep my source out of the game for the most part and use SVN. LSLEditor is a decent app for working with the scripts, and if you create a solution with objects, it can emulate a lot of the in-game environment (giving objects, reading notecards, etc.).
I personally keep any code snippets that I feel are worth keeping around on github.com (http://github.com/cylence/slscripts).
Git is a very good source code manager for LSL, since its comparisons work line by line, unlike other SCMs such as Subversion or CVS. The reason this is so crucial is that most Second Life scripts live in ONE FILE (since they can't call each other... grrr), so having the comparison done at the file level is not nearly as effective. Comparing line by line is perfect for LSL. With that said, it also (like SourceForge and Google Code) allows you to make your code publicly viewable (if you so choose) and available for download in a compressed file for easier distribution.
Late reply, I know, but some things have changed in SecondLife, and some things, well, have not. Since the Third Party Viewer policy still keeps a hard wall up against saving and loading objects between viewer and system, I was thinking about another possibility so far completely overlooked: Bots!
Scripted agents, AKA bots, have all the usual avatar actions available to them. Although I have never seen one used as an object repository, there is no reason you couldn't create one. Logged in as a separate account, the agent can be wherever you want, automatically or by command; it can then collect any or all objects you are working on, at set intervals or on command, and anything it has collected can be given to you or your collaborators.
I won't say it's easy to script an agent, and I can't even speak to making an extension to a scripted agent myself, but if you don't want to start from scratch there is an extensive open source framework to build on: Corrade. Other bot services don't seem to list 'object repository' among their abilities either, but any that support CasperVend must already provide the ability to receive items on request.
Of course, the lo-fi route of just regularly taking a copy and sending the objects to a backup avatar may still be a simple backup solution for one user, although that does necessitate logging in as the other account, either in parallel or once every 20 or so items, to be sure they are being received and not capped by the server. This process cannot rename the items or sort them automatically the way a bot can. Identically named items are listed in inventory with the most recent at the top, but this is a mess when working with multiples of various items.
Finally, there is a Coalesce feature for managing several items as one in inventory. This is currently not supported for sending or receiving objects, but in the absence of a bot it can make it easier to keep track of projects you don't wish to actually link as one item. (Caveat: don't rez 'no-copy' coalesced items near 'no-build' land parcels; any that cannot be rezzed are completely lost.)

What's the benefit of Connectedness?

What is the benefit of Connectedness as defined by Resource Oriented Architecture (ROA)? The way I understand it, the crux of Connectedness is the ability to crawl the entire application state using only the root URIs.
But how useful is that really?
For example, imagine that HTTP GET http://example.com/users/joe returns a link to http://example.com/users/joe/bookmarks.
Unless you're writing a dumb web crawler (and even then I wonder), you still need to teach the client what each link means at compile-time. That is, the client needs to know that the "bookmarks URI" returns a URI to Bookmark resources, and then pass control over to special Bookmark-handling algorithms. You can't just pass links blindly to some general client method. Since you need this logic anyway:
What's the difference between the client figuring out the URI at runtime versus providing it at compile-time (making http://example.com/users/bookmarks a root URI)?
Why is linking using http://example.com/users/joe/bookmarks/2 preferred to id="2"?
The only benefit I can think of is the ability to change the path of non-root URIs over time, but this breaks cached links so it's not really desirable anyway. What am I missing?
You are right that changing URIs is not desirable, but it does happen, and using complete URIs instead of constructing them makes the change easier to deal with.
One other benefit is that your client application can easily retrieve resources from multiple hosts. If you allowed your client to build the URIs, the client would need to know on which host certain resources reside. This is not a big deal when all of the resources live on a single host, but it becomes trickier when you are aggregating data from multiple hosts.
My last thought is that maybe you are oversimplifying the notion of connectedness by looking at it as a static network of links. Sure, the client needs to know about the possible existence of certain links within a resource, but it does not necessarily need to know exactly what the consequences of following that link are.
Let me try to give an example: a user is placing an order for some items and is ready to submit their cart. The submit link may actually go to two different places, depending on whether the order will be delivered locally or internationally. Maybe orders over a certain value need to go through an extra step. The client just knows that it has to follow the submit link, but it does not have compiled-in knowledge of where to go next. Sure, you could build a common "next step" type of resource so the client could have this knowledge explicitly, but by having the server deliver the link dynamically you introduce a lot less client-server coupling.
I think of the links in resources as placeholders for what the user could choose to do. Who will do the work and how it will be done is determined by what uri the server attaches to that link.
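A hedged TypeScript sketch of that cart example, with the client following a server-provided link rather than constructing the URI itself; the link-relation name and response shape are assumptions, loosely in the HAL style:

```typescript
// Follow the server-provided "submit" link instead of hardcoding the URI.
// The representation shape and relation name are hypothetical.
interface CartRepresentation {
  items: { sku: string; qty: number }[];
  links: Record<string, { href: string }>; // e.g. links["submit"]
}

async function submitCart(cartUri: string): Promise<void> {
  const cart: CartRepresentation = await (await fetch(cartUri)).json();
  const submit = cart.links["submit"];
  if (!submit) throw new Error("Cart cannot be submitted yet");
  // The server decides where "submit" points (local vs. international
  // delivery, an extra review step, even another host); the client just
  // follows the link.
  await fetch(submit.href, { method: "POST" });
}
```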
It's easier to extend, and you can write small apps and scripts to work alongside the core application fairly easily.
Added: Well, the whole point starts with the idea that you don't specify at compile time, in a hardcoded fashion, how to convert URIs to UIDs; instead you might use dictionaries or parsing to do that, giving you a much more flexible system.
Then, later on, say someone decides to change the URI syntax: that person could write a small script that translates URIs without ever touching your core application. Another benefit is that if your URIs are logical, other users, even within a corporate scenario, can easily write mash-ups to make use of your system without touching your original app or even recompiling it.
Of course, the counterpoint to the whole argument is that it will take you longer to implement a URI-based system than a simple UID-based system. But if your app will be used by others on a regular basis, that initial time investment will pay back greatly (it could be said to have a good extensibility-based ROI).
Added: Another point, which is a matter of taste to some degree, is that the URI itself will be a better name, because it conveys a logical and defined meaning.
I'll add my own answer:
It is far easier to follow server-provided URIs than construct them yourself. This is especially true as resource relationships become too complex to be expressed in simple rules. It's easier to code the logic once in the server than re-implement it in numerous clients.
The relationship between resources may change even if individual resource URIs remain unchanged. For example, imagine Google Maps indexes their map tiles from 0 to 100, counting from the top-left to the bottom-right of the screen. If Google Maps were to change the scale of their tiles, clients that calculate relative tile indexes would break.
Custom IDs identify a resource. URIs go a step further by identifying how to retrieve the resource representation. This simplifies the logic of read-only clients such as web-crawlers or clients that download opaque resources such as video or audio files.