An API defines that a date should be sent as ISO 8601, but we have a requirement to send "forever" as a date, and the standard does not seem to cover this. Can anyone suggest a better solution than Dec 31, 9999? Is there a different standard that would be more appropriate?
Quoting ISO 8601:2004(E):
3.5 Expansion
By mutual agreement of the partners in information interchange, it is permitted to expand the component
identifying the calendar year, which is otherwise limited to four digits. This enables reference to dates and
times in calendar years outside the range supported by complete representations, i.e. before the start of the
year [0000] or after the end of the year [9999].
Also relevant may be section 3.7, Mutual agreement, which basically says you're free to define your own representations as long as you don't interfere with the representations defined in ISO 8601. So 9999-12-32 or 9999-13-00 could be mutually agreed upon as your proposed "forever" value.
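For illustration, here is a minimal sketch (TypeScript; the sentinel constant and function names are made up for this example and are not part of any standard) of how both parties might encode and decode a mutually agreed "forever" value:

```typescript
// Hypothetical sentinel agreed under section 3.7 "Mutual agreement".
// Any string outside the valid ISO 8601 representations would do;
// "9999-12-32" is one of the values suggested above.
const FOREVER_SENTINEL = "9999-12-32";

// Encode either a concrete date or the domain-level notion of "forever".
function encodeDate(d: Date | "forever"): string {
  if (d === "forever") {
    return FOREVER_SENTINEL;
  }
  return d.toISOString().slice(0, 10); // YYYY-MM-DD
}

// Decode: map the sentinel back to "forever"; everything else parses normally.
function decodeDate(s: string): Date | "forever" {
  return s === FOREVER_SENTINEL ? "forever" : new Date(s);
}

console.log(encodeDate("forever"));    // "9999-12-32"
console.log(decodeDate("2024-06-01")); // a regular Date
```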
As to what's common practice, I'd say it depends.
I'd go for 3.7 whenever possible. But it's important to assess your role within the whole set-up. E.g. if you're using a 3rd party API within your own set of components for the sake of convenience or future compatibility, there should be no problem at all. If you're part of a bigger system and you'd have to convince tens of other system parties/components/modules/etc. I'd say it's not worth the trouble.
It's also very important to check legacy code, and to at least sketch out a plan for how to do the migration in case it breaks set-ups beyond belief. That could be anything from documenting your API "extension" to actually sending patches to the legacy code maintainers.
I am working on a course that uses SCORM 2004 3rd edition, and I have this problem: for a very small portion of the people using the course (around 1%-1.5%) the course does not register completion in the LMS after they finish it. I am checking the difference between all the working cases and this 1% that did not manage to complete the course, and the only difference that I see is the primary objective. In the working ones the primary objective has "Success Status" set to "passed", and in the 1% it does not even exist.
I tried to read in several places what the primary objective is, and all I understood is that it is something that is defined in the imsmanifest.xml (in my case it is not), and if it is not there the LMS will create at least one for the course. If you set 'cmi.success_status' to "passed" and 'cmi.completion_status' to "completed", the LMS will set the primary objective to 'passed' as well.
So, my question is: did I understand this correctly, or does it work in a totally different way? What exactly is the primary objective, and is it my responsibility to somehow set it, or is the LMS responsible for this?
Run-time data related to objectives (cmi.objectives.n.xxx) should not be initialized for an activity’s associated SCO unless an objective ID attribute is defined in the sequencing information (imsss:primaryObjective or imsss:objective).
For example, on cloud.scorm.com, if I do not specify a primary objective I do not get any cmi.objectives._count. If I explicitly set a primary objective then it/they can show up.
So you can define a primary objective in the imsmanifest.xml, but it's possible the platform, as you stated, is defaulting one. I've seen this occur on a platform before, and it really goofs up the logic calculating the SCO's objective scaled scores when you have a rogue objective, commonly with no data. Not to mention what you're encountering with "satisfiedByMeasure".
My interpretation of what happened here is that there was a misunderstanding of the way the devs implemented the Runtime Environment. There are "Global Objectives" and "Primary Objectives", but I (personal opinion) do not believe they should be adding a cmi.objectives.0 unless one is physically present in your manifest, or added by 'other means' through the LMS administration. My $.02 is that this area of the specification caused confusion, which led to some of these behaviors. Even how the LMS determines and stores these was not (again, my opinion) laid out well in the specification and left room for interpretation.
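For clarity, the runtime calls under discussion are roughly these (a simplified TypeScript/JavaScript sketch; a real SCO discovers the API object by walking parent frames, error handling is omitted, and whether a cmi.objectives.0 entry then appears is entirely up to the LMS):

```typescript
// Minimal sketch of the SCO-side SCORM 2004 runtime calls discussed above.
// Here we simply assume the API_1484_11 object is already available.
declare const API_1484_11: {
  GetValue(element: string): string;
  SetValue(element: string, value: string): string;
  Commit(param: string): string;
};

function reportResult(): void {
  // Set the SCO's own status...
  API_1484_11.SetValue("cmi.success_status", "passed");
  API_1484_11.SetValue("cmi.completion_status", "completed");
  API_1484_11.Commit("");

  // ...then check whether the LMS exposes any objectives (rogue or defined).
  const count = API_1484_11.GetValue("cmi.objectives._count");
  console.log("cmi.objectives._count =", count);
}
```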
The whole purpose of Simple Sequencing and/or Sequencing and Navigation was to give you (instructional designer, content developer or otherwise) the capability to bake in flow controls, simple or complex, that allow the LMS to manage user navigation either through input (clicking on content/assets) or based on performance, using rulesets.
There was an "Impact Summary" document written up, too.
After X months it turned out that the LMS the client is using (SABA) is buggy and does have problems with SCORM 2004 (they have exactly the same problem with other courses that are not related to mine). So what fixed my problem was converting the course to SCORM 1.2.
Our development group is working towards building up a service catalog.
Right now, we have two groups: one to sell a product, another to service that product.
We have one particular service that calculates if the price of the product is profitable. When a sale occurs, the sale can be overridden by a manager. This sale must also be represented in another system to track various sales and the numbers must match. The parameters of profitability also change, and are different from month to month, but a sale may be based on the previous set of parameters.
Right now the sale profitability service only calculates the profit; it also provides a RESTful URI.
One group of developers has suggested that the profitability service also support these "manager overrides", along with a date parameter to calculate based on a previous date. Of course the sales group of developers disagrees. If the service won't support this, the servicing developers will have to do an ETL between the two systems for each product, instead of just using the profitability service. Right now, since we do not have a set of services to handle this, production support gets the request and then has to update the one or more systems associated with that given product.
So, if a service works for a narrow slice, but an exception based business process breaks it, does that mean the boundaries of the service are incorrect and need to account for the change in business process?
Second, does adding a date parameter extend the boundary of the service too much, or should it be expected that if the service already has the parameters, it would also have a history of those parameters? At the moment, we don't have a service that only stores the parameters, as no one has expressed a need for it.
If there is any clarification needed before an answer can be given, please let me know.
I think the key here is: how much pain would be incurred by both teams if an ETL was introduced between the two?
Not that I think you're over-analysing this, but if I may, you probably have an opinion that adding a date parameter into the service contract is bad, but also dislike the idea of the ETL.
Well, strategy aside, I find these days my overriding instinct is to focus less on the technical ins and outs and more on the purely practical.
At the end of the day, ETL is simple, reliable, and relatively pain-free to implement; however, it comes with side effects. The main one is that you'll be coupling changes to your service's db schema with an outside party, which will limit your options to evolve the service in the future.
On the other hand allowing consumer demand to dictate service evolution is easy and low-friction, but also a rocky road as you may become beholden to that consumer at the expense of others.
Another possibility is to allow the extra service parameters to be delivered to the consumer via a message, rather than across the same service. This would allow you to keep your service boundary intact and let the consumer hold the necessary parameters locally.
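As a rough sketch of the two shapes (TypeScript; the type and field names such as asOf are invented for illustration, not an actual contract):

```typescript
// Option A: extend the service contract with an effective date
// (the "asOf" parameter and all field names are hypothetical).
interface ProfitabilityRequest {
  productId: string;
  salePrice: number;
  asOf?: string;             // ISO date; omitted = use the current parameter set
  managerOverride?: boolean; // the exception-based business process
}

// Option B: publish the parameter set as a message/event, so the consumer
// keeps its own history and performs the historical calculation locally.
interface ProfitabilityParametersChanged {
  effectiveFrom: string;              // ISO date this parameter set applies from
  parameters: Record<string, number>; // whatever drives the profitability rules
}
```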
Apologies if this does not address your question directly.
Today I've been presented with a fun challenge and I want your input on how you would deal with this situation.
So the problem is the following (I've converted it to demo data as the real problem wouldn't make much sense without knowing the company dictionary by heart).
We have a decision table that has a minimum of 16 conditions. Because it is an impossible feat to manage all of them (2^16 possibilities) we've decided to only list the exceptions. Like this:
As an example I've only added 10 conditions but in reality there are (for now) 16. The basic idea is that we have one baseline (the default) which is valid for everyone and all the exceptions to this default.
Example:
You have a foreigner who is also a pirate.
If you go through all the exceptions one by one, condition by condition, you remove the exceptions that have at least one condition that fails. In the end you'll be left with the following two exceptions that are valid for our case; the match is on the IsPirate and the IsForeigner conditions. But as you can see there are 2 results here (3, actually, if you count the default).
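To make the example concrete, here is a rough sketch of how such an exception list might be represented (TypeScript; the field names and the outputs chosen for each row are made up, and an omitted condition plays the role of the "don't care" entry):

```typescript
// One row of the exception table: only the conditions that matter are listed;
// anything omitted is a "don't care".
interface ExceptionRule {
  name: string;
  conditions: Partial<Record<"IsPirate" | "IsForeigner" | "HasChildren", boolean>>;
  outputs: { CanGetJob: boolean; CanBeEvicted: boolean };
}

const defaultRow = { CanGetJob: true, CanBeEvicted: false }; // the baseline

const exceptions: ExceptionRule[] = [
  { name: "Pirates",    conditions: { IsPirate: true },    outputs: { CanGetJob: false, CanBeEvicted: true } },
  { name: "Foreigners", conditions: { IsForeigner: true }, outputs: { CanGetJob: false, CanBeEvicted: false } },
];

// A rule matches when none of its listed conditions fail.
function matches(rule: ExceptionRule, person: Record<string, boolean>): boolean {
  return Object.entries(rule.conditions).every(([key, value]) => person[key] === value);
}

// A foreign pirate matches both rows, and their outputs disagree - exactly
// the ambiguity described above.
const person = { IsPirate: true, IsForeigner: true, HasChildren: false };
console.log(exceptions.filter(r => matches(r, person)).map(r => r.name)); // ["Pirates", "Foreigners"]
```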
Our solution
What we came up with to solve this is that in the GUI where you add these exceptions, an algorithm should run that checks for such cases and forces you to define the exception more specifically. This is still only a theory and hasn't been tested out, but we think it could work this way.
My Question
I'm looking for alternative solutions that make the rules manageable and prevent the problem I've shown in the example.
Your problem seems to be resolution of conflicting rules. When multiple rules match your input (your foreigner and pirate) and they end up recommending different things (your CanGetJob and CanBeEvicted), you need a strategy for resolving the conflict.
What you mentioned is one way of resolution -- which is to remove the conflict in the first place. However, this may not always be possible, and not always desirable because when a user adds a new rule that conflicts with a set of old rules (which he/she did not write), the user may not know how to revise it to remove the conflict.
Another possible resolution method is prioritization. Mark a priority on each rule (based on things like the user's own authority etc.), sort the matching rules according to priority, and apply in ascending sequence of priority. This usually works and is much simpler to manage (e.g. everybody knows that the top boss's rules are final!)
Prioritization may also be used to mark a certain rule as a "global override". In your example, you may want to make "IsPirate" an override rule, meaning it overrides the settings for normal people. In other words, once you're a pirate, you're treated differently. This makes it very easy to design a system in which you have a bunch of normal business rules governing 90% of the cases, then a set of "exceptions" that are treated differently, automatically overriding certain things. In this case, you should also consider making "?" available in the output columns as well.
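A minimal sketch of that idea (TypeScript; the rule shape and the default values are illustrative): matching rules are applied in ascending priority order, later ones overwrite earlier ones, and an output a rule omits acts like a "?" that leaves the previous value untouched.

```typescript
interface PrioritizedRule {
  priority: number; // higher number = more authoritative (e.g. the boss's rules)
  matches(person: Record<string, boolean>): boolean;
  // Outputs the rule does not mention ("?") are simply left as they were.
  outputs: Partial<Record<"CanGetJob" | "CanBeEvicted", boolean>>;
}

function resolve(rules: PrioritizedRule[], person: Record<string, boolean>) {
  // Start from the default row, then apply matching rules in ascending priority,
  // so the highest-priority rule has the final say on any output it sets.
  const result: Record<string, boolean> = { CanGetJob: true, CanBeEvicted: false };
  rules
    .filter(rule => rule.matches(person))
    .sort((a, b) => a.priority - b.priority)
    .forEach(rule => Object.assign(result, rule.outputs));
  return result;
}
```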
One other possible resolution method is to include attributes in each of your conditions. For example, certain conditions must have no "zeros" in order to pass (? doesn't matter). Some conditions must have at least one "one" in order to pass. In other words, mark each condition as either "AND", "OR", or "XOR". Some popular file-system security uses this model. For example, CanGetJob may be AND (you want to be stringent on rights-to-work). CanBeEvicted may be OR -- you may want to evict even a foreigner if he is also a pirate.
An enhancement on the AND/OR method is to provide a threshold that the total result must meet before that condition passes. For example, if you put CanGetJob at a threshold of 2, then it must get at least two 1's in order to return 1. This is sometimes useful for conditions that are not clearly black-and-white.
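A sketch of the AND/OR/threshold combination (TypeScript; it assumes each matching rule has already cast a 0/1 vote for the output in question):

```typescript
// How to combine the 0/1 votes cast for one output column:
// "AND" = no zeros allowed, "OR" = at least one 1,
// a threshold = at least N ones (the enhancement described above).
type Combine = "AND" | "OR" | { threshold: number };

function combine(votes: number[], mode: Combine): boolean {
  if (mode === "AND") return votes.every(v => v === 1);
  if (mode === "OR") return votes.some(v => v === 1);
  return votes.filter(v => v === 1).length >= mode.threshold;
}

console.log(combine([1, 1, 1], "AND"));            // true  - strict, e.g. CanGetJob
console.log(combine([0, 1, 0], "OR"));             // true  - lenient, e.g. CanBeEvicted
console.log(combine([1, 0, 1], { threshold: 2 })); // true  - "at least two 1's"
```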
You can mix resolution methods: e.g. first prioritize, then use AND/OR to resolve rules with similar priorities.
The possibilities are limitless and really depend on what your actual needs are.
To me this problem is reminiscent of a business rules engine, where there is no known algorithm to derive the outputs from the inputs (e.g. using boolean logic), and the user (typically some sort of administrator) has to define all or some of the logic themselves.
This might sound like a bit of overkill, but on the other hand it provides virtually limitless extension capabilities: you don't have to code any new business logic, just define a new rule set.
As I understand your problem, you are looking for a nice way to visualise the editing for these rules. But this all depends on your programming language and the tool you select for this. Java, for example, has JBoss Drools. Quoting their page:
Drools Guvnor provides a (logically centralized) repository to store your business knowledge, and a web-based environment that allows business users to view and (within certain constraints) possibly update the business logic directly.
You could possibly use this generic tool or write your own.
Everything depends on what your actual rules will look like. Rules like 'IF has an even number of these properties THEN' would be painful to represent in this format, whereas rules like 'IF pirate and not geek THEN' are easy.
You can 'avoid the ambiguity' by stating that you'll always be taking the first actual match, in other words your rules have a priority. You'd then want to flag rules which have no effect because they are 'shadowed' by rules higher up. They're not hard to find, so it's something your program should do.
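A sketch of that shadow check (TypeScript, using the same "omitted condition = don't care" representation as before; names are invented): under first-match-wins ordering, a rule can never fire if some earlier rule's conditions are a subset of its own.

```typescript
interface OrderedRule {
  name: string;
  conditions: Record<string, boolean>; // omitted keys are "don't care"
}

// `earlier` shadows `later` if everything that matches `later` also matches
// `earlier`, i.e. earlier's conditions are a subset of later's conditions.
function isShadowedBy(later: OrderedRule, earlier: OrderedRule): boolean {
  return Object.entries(earlier.conditions)
    .every(([key, value]) => later.conditions[key] === value);
}

// Flag every rule that can never fire under first-match-wins ordering.
function findShadowedRules(rules: OrderedRule[]): string[] {
  return rules
    .filter((rule, i) => rules.slice(0, i).some(prev => isShadowedBy(rule, prev)))
    .map(rule => rule.name);
}

// Example: the second rule can never be reached.
console.log(findShadowedRules([
  { name: "AnyPirate",     conditions: { IsPirate: true } },
  { name: "ForeignPirate", conditions: { IsPirate: true, IsForeigner: true } },
])); // ["ForeignPirate"]
```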
Your interface could also indicate groups of rules where rules within the group can be in any order without changing the outcomes. This will add clarity to what the rules are really saying.
If some of your outputs are relatively independent of the others, you will also get a more compact and much clearer table by allowing question marks in the output. In that design the scan for first matching rule is done once for each output. Consider for example if 'HasChildren' is the only factor relevant to 'Can Be Evicted'. With question marks in the outputs (= no effect) you could be halving the number of exception rules.
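A sketch of that per-output scan (TypeScript, same rule shape as above; a "?" in an output is modelled as an omitted key): for each output column you take the first matching rule that actually specifies it, falling back to the default.

```typescript
interface RowRule {
  conditions: Record<string, boolean>;                             // omitted = don't care
  outputs: Partial<Record<"CanGetJob" | "CanBeEvicted", boolean>>; // omitted = "?"
}

function rowMatches(rule: RowRule, person: Record<string, boolean>): boolean {
  return Object.entries(rule.conditions).every(([k, v]) => person[k] === v);
}

// For one output column, scan the rules top-down and take the first matching
// rule that actually sets that output; otherwise use the default row's value.
function firstMatchOutput(
  rules: RowRule[],
  person: Record<string, boolean>,
  output: "CanGetJob" | "CanBeEvicted",
  defaultValue: boolean
): boolean {
  for (const rule of rules) {
    if (rowMatches(rule, person) && rule.outputs[output] !== undefined) {
      return rule.outputs[output]!;
    }
  }
  return defaultValue;
}
```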
My background for this is circuit logic design, not business logic. What you're designing is similar to, but not the same as, a PLA. As long as your actual rules are close to a sum of products, it can work well. If your rules aren't (for example the 'even number of these properties' rule), then the grid-like presentation will break down in a combinatorial explosion of cases. Your best hope if your rules are arbitrary is to get a clearer, more compact presentation with either equations or with diagrams like a circuit diagram. To be avoided, if you can.
If you are looking for a Decision Engine with a GUI, then you can try this one: http://gandalf.nebo15.com/
We just released it, it's open source and production ready.
You probably need some kind of inference engine. Think about doing it in Prolog.
I need to get the index of an exchange like NASDAQ rather than the price of a specific stock on that exchange. I suppose that Finance::Quote will come to the rescue, but after a quick go-through of the documentation, I find that the way one uses the module for a query is like:
%info = $q->fetch("australia","CML")
which means both the exchange and the stock should be specified in the query. The question then is: can the index itself be treated as a stock, with a symbol name that can be used in the query?
Of course, if you have another way to meet my needs other than using Finance::Quote, please feel free to write down your solution.
The problem with your question is that you are assuming that there is just one index for a particular exchange. Whilst there may well be a particular index that is dominant (e.g. for stocks primarily traded on the London Stock Exchange, the FTSE 100 might be considered the main index; similarly for the NYSE it would be the Dow Jones Industrial Average), other exchanges may have a less clear leader among their associated indices (e.g. for the Australian Stock Exchange, the S&P/ASX 200 and the All Ordinaries index are both frequently quoted side-by-side in the evening broadcast news).
Symbology of stocks, indices, option chains, futures, etc. is quite a complicated field in financial IT. Many of the symbology standards are backed by a data vendor (e.g. Reuters, Bloomberg) and use of their standards requires a commercial license. On the other hand, there are other efforts aiming to make symbology more open (Bloomberg themselves are behind one of these efforts).
I'm not familiar with the data sources of the Finance::Quote package you reference, but if you are serious about accessing market data (i.e. prepared to pay for it) and don't need the cost/complexity/speed of a solution from Reuters, Bloomberg, etc., you could do a lot worse than check out what Xignite offers in the way of market data accessible via web services.
The symbol for the NASDAQ Composite is "^IXIC". For the NYSE Composite it's "^NYA".
Each quote provider might have a different syntax, though.
I am currently in bureaucratic hell at my company and need to define what constitutes the different levels of software change to our test programs. We have a rough practice that we follow internally, but I am looking for a standard (if it exists) to reference in our Quality system. I recognize that systems may vary greatly between developers, but ultimately I am looking for a "best practice" guide to what constitutes a major change, a minor change etc. I would like to reference a published doc in my submission to our quality system for ISO purposes if possible.
To clarify: the software developed at my company is used internally for test automation of semiconductors. We are not selling this code, and versioning is really for record keeping only. We are using the x.y.z changes to determine the level of sign-off and approval needed for release.
A good practice is to use 3 level revision numbers:
x.y.z
x is the major version
y is the minor version
z is for bug fixes
The important thing is that two different software versions with the same x should have binary compatibility. A software version with a y greater than another, but the same x may add features, but not remove any. This ensures portability within the same major number. And finally z should not change any functional behavior except for bug fixes.
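As a rough sketch of how that convention can be turned into a mechanical sign-off rule (TypeScript; the level labels are invented, so map them onto your own approval levels):

```typescript
interface Version { x: number; y: number; z: number; }

function parse(v: string): Version {
  const [x, y, z] = v.split(".").map(Number);
  return { x, y, z };
}

// Classify a version bump; the labels are illustrative only.
function changeLevel(from: string, to: string): "major" | "minor" | "bugfix" | "none" {
  const a = parse(from), b = parse(to);
  if (b.x !== a.x) return "major";  // compatibility may break: full approval
  if (b.y !== a.y) return "minor";  // features added, nothing removed: lighter review
  if (b.z !== a.z) return "bugfix"; // behaviour unchanged except fixes: minimal sign-off
  return "none";
}

console.log(changeLevel("1.2.3", "2.0.0")); // "major"
console.log(changeLevel("1.2.3", "1.3.0")); // "minor"
```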
Edit:
Here are some links to revision-numbering schemes in use:
http://en.wikipedia.org/wiki/Software_versioning
http://apr.apache.org/versioning.html
http://www.advogato.org/article/40.html
I would add build number to the x.y.z format:
x.y.z.build
x = major feature change
y = minor feature change
z = bug fixes only
build = incremented every time the code is compiled
Including the build number is crucial for internal purposes where people are trying to figure out whether or not a particular change is in the binaries that they have.
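For example, a small sketch of the support question this answers (TypeScript; the parsing helpers are invented, and it assumes the build counter only ever increases):

```typescript
// Parse an "x.y.z.build" string as described above.
interface FullVersion { x: number; y: number; z: number; build: number; }

function parseFull(v: string): FullVersion {
  const [x, y, z, build] = v.split(".").map(Number);
  return { x, y, z, build };
}

// "Is the change that went in with build N present in the binary I'm running?"
// This assumes the build number increments with every compilation and never resets.
function containsBuild(binaryVersion: string, changeBuild: number): boolean {
  return parseFull(binaryVersion).build >= changeBuild;
}

console.log(containsBuild("1.4.2.1037", 1020)); // true  - newer build, change included
console.log(containsBuild("1.4.2.0998", 1020)); // false - older build, change missing
```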
To enlarge on what #lewap said, use
x.y.z
where z-level changes are almost entirely bug fixes that don't change interfaces or involve external systems
where y-level changes add functionality and may change the UI/API interface, in addition to fixing more serious bugs that may involve external systems
where x-level changes involve anything from a complete rewrite/redesign to just changing the database structures or changing databases (i.e. from Oracle to SQLServer) - in other words, anything that isn't a drop-in change and instead requires a "port" or "conversion" process
I think it might differ depending on whether you are working on internal software vs. external software (a product).
For internal software it will almost never be a problem to use a formally defined scheme. However for a product the version or release number is in most cases a commercial decision that does not reflect any technical or functional criteria.
In our company the x.y in an x.y.z numbering scheme is determined by the marketing boys and girls. The z and the internal build number are determined by the R&D department and track back into our revision control system and are related to the sprint in which it was produced (sprint is the Scrum term for an iteration).
In addition, formally defining some level of compatibility between versions and releases could cause you to move up very rapidly or to hardly move at all, which might not reflect the added functionality.
I think the best approach for you to take when explaining this to your co-workers is to use examples drawn from well-known and successful software packages, and the way they approach major and minor releases.
The first thing I would say is that the major.minor dot notation for releases is a relatively recent invention. For example, most releases of UNIX actually had names (which sometimes included a meaningless number) rather than version numbers.
But assuming you want to use major.minor numbering, then the major number indicates a version that is basically incompatible with much that went before. Consider the change from Windows 2.0 to 3.0 - most 2.0 applications simply didn't fit in with the new overlapped windows in Windows 3.0. For less all-encompassing apps, a radical change in file formats (for example) could be a reason for a major version change - word processing and graphics apps often work this way.
The other reason for a major version number change is that the user notices a difference. Once again this was true for the change from Windows 2.0 to 3.0 and was responsible for the latter's success. If your app looks very different, that's a major change.
As for the minor version number, this is typically used to indicate a change that actually is quite major, but that won't be noticeable to the user. For example, the differences internally between Win 3.0 and Win 3.1 were actually quite major, but the interface stayed the same.
Regarding the third version number, well, few people know what it really means and fewer care. For example, in my everyday work I use the GNU C++ compiler version 3.4.5 - how does this differ from 3.4.4? I haven't a clue!
As I said before in an answer to a similar question: the terminology used is not very precise. There is an article describing the five relevant dimensions. Data management tools for software development don't tend to support more than three of them consistently at the same time. If you want to support all five you have to describe a development process:
Version (semantics: modification)
View (semantics: equivalence, derivation)
Hierarchy (semantics: consists of)
Status (semantics: approval, accessibility)
Variant (semantics: product variations)
Peter van den Hamer and Kees Lepoeter (1996) Managing Design Data: The Five Dimensions of CAD Frameworks, Configuration Management, and Product Data Management, Proceedings of the IEEE, Vol. 84, No. 1, January 1996