Unit Conversion Library - Objective-C [closed]

Does anyone know of a good unit conversion library that works with Objective-C? Part of a small app that I'm working on requires me to take two items that are for sale and compare their prices. This means that I'll have to convert the quantities to a base unit for a valid comparison.
For example, if I have one item that can be purchased for $1 per pint and another that costs $3 per gallon, I need to be able to convert these to a common base unit.
Rather than re-invent the wheel, is there a library out there that can do the base-unit conversion for me?
Thanks!

I've just pushed my Objective-C unit conversion library to GitHub: https://github.com/HiveHicks/HHUnitConverter. Take a look at HHUnitConverter-MacOSX-Example for examples on how to use it.

The main purpose of a programming library is to abstract general-purpose, reusable code away into a higher-level unit so that you don't have to care about the details. But this is an area where the details probably matter to your project.
Doing math for unit conversions involves a lot of trade-offs between accuracy and performance. If you use native floating point types, you need to avoid situations where a Very Large Number is added to a very small number: the small number may completely vanish. If you use a custom numerical representation, math will be an order of magnitude slower, which may or may not be a problem depending on how much math your application is doing.
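To make that concrete, here is a minimal sketch (in Python, whose float is the same 64-bit double you would use in Objective-C) of the small number vanishing; the specific magnitudes are just illustrative:
large = 1.0e17   # a Very Large Number
small = 1.0      # a very small number by comparison
# With 64-bit doubles the sum rounds straight back to the large value,
# so the small contribution is lost entirely.
print(large + small == large)   # prints True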
Also, choosing that "common base unit" for comparisons is 1,000 times easier if you know what context you are working in. Given your example, it seems like pints or ounces or gallons or even liters might make sense. But if you were working with megaliters, a library that converted everything down to liters might lose a lot of precision. The library would either need to choose a default and be less generic, or allow changing the common unit, which would make it much more complicated (and therefore bug prone).
So, while I can't authoritatively answer your question and say no such library exists, that's why I think you haven't found one.

If you have a small number of conversions, you are best off rolling your own. Instead of converting to a base unit and back, you could store the direct conversion factors (pints->gallons instead of pints->liters->gallons); see the sketch after this list.
If you can bridge to Python, there are many libraries available. Unum and PhysicalQuantities are two.
You could also have Google do the work if you have internet connectivity, although there is probably a limit on the number of queries.
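To illustrate the "roll your own direct conversion factors" suggestion (and since bridging to Python was mentioned above), here is a minimal Python sketch; the table, function name, and unit choices are made up for illustration:
# Direct conversion factors: (from_unit, to_unit) -> multiplier.
CONVERSION_FACTORS = {
    ("pint", "gallon"): 1.0 / 8.0,   # 8 US pints per US gallon
    ("gallon", "pint"): 8.0,
    ("ounce", "pint"): 1.0 / 16.0,   # 16 US fluid ounces per US pint
    ("pint", "ounce"): 16.0,
}

def price_per_unit(price, quantity, unit, target_unit):
    # Convert "price per (quantity, unit)" into a price per target_unit.
    factor = 1.0 if unit == target_unit else CONVERSION_FACTORS[(unit, target_unit)]
    return price / (quantity * factor)

# $1 per pint vs. $3 per gallon, both expressed as a price per gallon:
print(price_per_unit(1.00, 1, "pint", "gallon"))    # 8.0 -> $8 per gallon
print(price_per_unit(3.00, 1, "gallon", "gallon"))  # 3.0 -> $3 per gallon
At this scale, a flat table of factors is easier to audit than a generic base-unit scheme.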

MKUnits is what you are looking for. This library is very popular among iOS developers. It is available on CocoaPods.
It provides units of measurement of physical quantities and simplifies manipulation of them. Unit conversion is a very precise and straightforward process. You can easily extend it by adding your own units in no time.

I wrote an Objective-C units-of-measure library called UnitsKit. It does more than basic conversion; for example, it handles multiplication of units. Check it out.

R/exams: Open-ended questions with exams2moodle

My goal is to create a question using R/exams and Moodle including a few plots generated in the Rmd exercise file. The students should describe the plots verbally and then the exercise is graded manually.
Is it possible to use exams2moodle to create such an open-ended free text question for Moodle? There is no extype for it. In the documentation the only hint is:
"In order to generate free text questions in moodle one may specify extra parameters via \exextra. Currently the following options are supported:".
I have tried to add \exextra parameters to the metainformation, but it did not change anything.
A worked example can be found in the essayreg exercise shipped within the package, see: http://www.R-exams.org/templates/essayreg/
And you are right that this is not very well documented. The reason is that we have used somewhat different exextra tags for the Moodle export and for the QTI 2.1 export. We have to improve and unify that in one of the next R/exams versions.
Also, another pointer, in case this is useful to anyone reading the question: another useful strategy for asking about the interpretation of (statistical) graphics is the multiple-choice format. Let the participants judge some statements about the graphic that can either be approximately correct or clearly wrong. Of course, with open-ended questions you can catch more nuances, but with multiple-choice questions you can automatically assess a much larger number of participants. Or participants can self-assess in practice quizzes, etc. Examples of this are the boxplots and scatterplot exercises:
http://www.R-exams.org/templates/boxplots/
http://www.R-exams.org/templates/scatterplot/

Is there a rule of thumb for how granular an object should be in OO programming? [closed]

I'm in school learning OO programming, and for the next few months, every assignment involves dice games and word games like Jumble and Hangman. Each assignment has us creating a new class for these variables: HangmanWordArray, JumbleWordArray, etc. In the interest of reusability, I want to create a class (or series of classes) that can be re-used for my assignments. I'm having a hard time visualizing what that would look like, and I hope my question makes sense...
Let's assume that the class(es) will contain properties with accessors and mutators, and methods to return the various objects...a word, a letter, a die roll. Is there a rule of thumb for how to organize these classes?
Is it best to keep one object per class? Or group all the objects in a single class because they're all related as "stuff I need for assignments?" Or group by data type, so all the numeric objects are in one class and all the strings in another?
I guess I'm grappling with how a programmer, in the real world, makes decisions about how to group objects within a class or series of classes, and some of the questions and thought processes I should be using to frame this type of design scenario.
Realistically, it varies from project to project. There is no guaranteed 'right way' to group anything; it all depends on your needs. What it comes down to is manageability, meaning how easily you can read and update old code. If you can contain all your games in a single 'games' class, then there's nothing wrong with doing it. However, if your games are very complicated with many subs and variables, perhaps moving them all to their own class would be easier to manage.
That being said, there are ways to logically group items. For instance if you have a lot of solo functions that are used for manipulation (char to string, string to int, html encode/decode, etc.), you may decide to create a 'helper functions' class to hold them all. Similarly, if your application uses a database connection, you may create a class to hold and manage a shared connection as well as have methods for getting query results and executing non-queries.
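As a rough sketch of those two groupings (in Python rather than whatever language your course uses, and with class and method names invented for illustration):
import sqlite3

class TextHelpers:
    # Solo manipulation functions collected into one "helper functions" class.
    @staticmethod
    def to_int(text, default=0):
        try:
            return int(text)
        except ValueError:
            return default

    @staticmethod
    def html_encode(text):
        return text.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;")

class Database:
    # Holds one shared connection and wraps query / non-query execution.
    def __init__(self, path):
        self.connection = sqlite3.connect(path)

    def query(self, sql, params=()):
        return self.connection.execute(sql, params).fetchall()

    def execute_non_query(self, sql, params=()):
        self.connection.execute(sql, params)
        self.connection.commit()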
Some people try to break things down too much. For example, instead of having the database core mentioned above, they might create one class to create and manage the database connection, and then another class that uses the connection class to handle queries. Not that this method won't work, but it may become very difficult to manage when items are split up too finely.
Without knowing exactly what you are doing, there's no way to tell you how to do it. If you reuse the same methods in each project, then perhaps you can place them somewhere where they can be shared. The best way I've found to figure out what works best is just to try it out and see how it responds!
What I see people doing is breaking down their objects and methods until each method is just a handful of lines of code; if any method exceeds a page of code, they will try to break down the object structure further in order to shorten things up.
I personally have no objection to long methods, as long as they are readable. I think a "one-page limit" tends to create too much granularity, and risks more confusion rather than less. But this seems to be the current fashion.
Just reporting what I'm seeing in the wild.

How bad is SLOC (source lines of code) as a metric? [closed]

We are documenting our software development process. For technical people, this is pretty easy: iterative development with internal milestones every four weeks and external ones every three months.
However, the purpose of this exercise is to expose things for our project management in terms that they can understand. Specifically, these non-technical managers need metrics that they can understand.
I understand our options for metrics well and have proposed a whole set (requirements met and actual costs vs. budgeted costs are two of my favorites). However, we do have some old hands involved and they tend to hang onto metrics like SLOC.
I understand the temptation of SLOC: it seems easy for non-software people to understand and it seems like the closest analog of a physical thing (it's just like counting punched cards back in the old days!).
So here's the question: how can I explain the dangers of SLOC to a non-technical person?
Here's some concrete motivation: we work on a fairly mature deployed system that has years of history behind it. As we add features, SLOC tends to stay approximately level or even decrease (refactoring removes old/dead code, new features are really just adjustments of existing ones, etc.). To a non-programmer manager, a non-increasing SLOC in a development project is perplexing at best...
Clarifying in response to a recent answer below: remember, I'm arguing that SLOC is a bad metric for the purposes of measuring project progress. I'm not arguing that it is a number that's not worth collecting. It requires extensive context to do anything useful with it and most program managers don't have that context.
Someone said:
"Using SLOC to measure software progress is like using kg for measuring progress on aircraft manufacturing."
It is totally inappropriate, as it encourages bad practices like:
the copy-paste syndrome
discouraging refactoring that would make things easier
stuffing the code with meaningless comments
...
The only use is that it can help you estimate how much paper to put in the printer when you print out the complete source tree.
The issue with SLOC is that it's an easy metric to game. Being productive does not equate to producing more code. So the way I've explained it to people, aside from what Skilldrick said, is this:
The more lines of code there are, the more complicated something gets.
The more complicated something gets, the harder it is to understand it.
Before I add a new feature or fix a bug I need to understand it.
Understanding takes time.
Time costs money.
Smaller code -> easier to understand -> cheaper to add new features
Bean counters can understand that.
Show them the difference between:
for(int i = 0; i < 10; i++) {
print i;
}
and
print 0;
print 1;
print 2;
...
print 9;
And ask them whether 10 SLOC or 3 SLOC is better.
In response to the comments:
It doesn't take long to explain how a for loop works.
After you show them this, say "we now need to print numbers up to 100 - here's how you make that change." and show how much longer it takes to change the non-DRY code.

I disagree that SLOC is a bad metric. It may be moot to go into a years-old question with eleven answers, but I'll still add another.
Most arguments call it a bad metric because it is not suited to directly measuring productivity. That is a strange argument; it assumes the metric will be used in an insane way. With this reasoning, one could call the Kelvin a bad unit because it is unsuited to measuring distance.
Code length is a viable measure of ballast.
The number of non-comment code lines correlates with:
undetected errors
maintenance costs
training time for new contributors
migration costs
new feature costs
and many more similar kinds of costs, like the cost of optimization.
Of course SLOC count isn't a precise measure of any of these. Code can be anywhere between very nice and very ugly to manage. But it can be assumed that code length is rarely free, and thus, longer code is often harder to manage.
If I were managing a team of programmers, I would very much want to keep track of the ballast it creates or removes.
Explain that SLOC is an excellent measure of the number of lines of code in the application, and nothing else. The number of lines in a book or the length of a film doesn't determine how good it is. You can improve a film and shorten it, and you can improve an application and reduce the lines of code.
Pretty bad (-:
A much better idea would be to count test cases rather than code.
The idea is this: a developer should commit a test case that fails, then commit the fix in the next build, and the test case should pass... just measure how many test cases the developer added.
As a bonus, collect coverage stats (branch coverage is better than line coverage here).
You don't judge how good a plane is (how many features it has, how it performs, etc.) based on its weight (SLOC).
When you want your plane to fly higher, longer, and perform better, you don't add weight to it. You replace parts of it with lighter/better materials. You strip off parts you don't need so as not to add unnecessary weight.
I believe SLOC is a great metric. It tells you how large your system is. That is good for judging complexity and resources. And it helps you prepare the next developer for working on a codebase.
But SLOC count should be analyzed only AFTER other appropriate code quality metrics have been applied. So...
Do NOT write 2 lines of code when 1 will do, unless the 2-line version makes the code 2 times easier to maintain.
Do NOT fluff code with unnecessary comments just to fluff SLOC count.
Do NOT pay people by SLOC count.
I have been managing software projects for 30 years. I use SLOC count all the time, to help understand mature systems. I have never found it useful to even glance at SLOC count until a project is near version 1.0 release.
Basically, during the development process, I worry about quality, performance, usability, and conformance to specifications. Get those right, and the project will probably be a success. When the dust settles, look at SLOC count. You might be surprised that you got SO much out of 5,000 lines of code. And you might be surprised that you got SO little! (But SLOC count does not affect quality, performance, usability, and conformance to specification.)
And always code like the person who will be working on your code next is a violent psychopath who knows where you live.
Cheers,
Uncle Chip
Even modern code metrics tools criticize SLOC counting; I like the point made in the ProjectCodeMeter FAQ:
What's wrong with counting Lines Of Code (SLOC / LLOC)?
Why SLOC is bad as an individual metric of productivity
Think of code as a block of clay/stone. You need to carve, say, 10 statues. It's not how many statues you carve that counts; it's how well you carve them. Similarly, it's not how many lines you've written but how well they function. In the case of code, LOC can backfire as a metric this way.
Productivity also changes when writing a complex piece of code. It takes a second to write a print statement but a lot of time to write a complex piece of logic. Not all fingers are equal.
How SLOC can be used to your benefit
I think defects per SLOC is a good metric. Yes, the difficulty level comes into play, but this is a good parameter that the managers can throw around while doing business. Try to think from their perspective too. They don't hate you or your work, but they need to tell customers that you're the best, and for that they need something tangible. Give them what you can :)
SLOC can be changed dramatically by adding extra empty lines ("for readability") or by adding or removing comments. So relying on SLOC alone can lead to confusion.
Why don't they understand that the SLOC hasn't changed, but the software does more than it did yesterday because you've added new features or fixed bugs?
Now explain it to them like this. Measuring how much work was done in your code by comparing the lines of code is the same as measuring how many features are in your cell phone by comparing its size. Cell phones have decreased in size over 20 years while adding more features because of technological improvements and new techniques. Good code follows this same principle: we can express the same logic in fewer and fewer lines of code, making it faster to run, easier to maintain, and simpler to understand as we improve our understanding of the problem and introduce new techniques for development.
I would get them to focus on the business value returned through feature development, maintenance, and bug fixes. If whoever has to be happy with the software says they can see improvement, don't sweat the SLOC.
Go read this:
https://stackoverflow.com/questions/3800707/what-is-negative-code

Looking for examples where knowledge of discrete mathematics is helpful [closed]

Inspired by Michael Feathers' SCNA talk "Self-Education and the Craftsman", I am interested to hear about practical examples in software development where discrete mathematics has proved helpful.
Discrete math has touched every aspect of software development, as software development is based on computer science at its core.
http://en.wikipedia.org/wiki/Discrete_math
Read that link. You will see that there are numerous practical applications, although this Wikipedia entry speaks mainly in theoretical terms.
Techniques I learned in my discrete math course from university helped me quite a bit with the Professor Layton games.
That counts as helpful... right?
There are a lot of real-life examples where map coloring algorithms are helpful, besides just for coloring maps. The question on my final exam had to do with traffic light programming on a six-way intersection.
As San Jacinto indicates, the fundamentals of programming are very much bound up in discrete mathematics. Moreover, 'discrete mathematics' is a very broad term. These things perhaps make it harder to pick out particular examples. I can come up with a handful, but there are many, many others.
Compiler implementation is a good source of examples: obviously there's automata / formal language theory in there; register allocation can be expressed in terms of graph colouring; the classic data flow analyses used in optimizing compilers can be expressed in terms of functions on lattice-like algebraic structures.
A simple example of the use of directed graphs is a build system that takes the dependencies involved in individual tasks and orders them by performing a topological sort. I suspect that if you tried to solve this problem without having the concept of a directed graph, you'd probably end up trying to track the dependencies all the way through the build with fiddly book-keeping code (and then finding that your handling of cyclic dependencies was less than elegant).
Clearly most programmers don't write their own optimizing compilers or build systems, so I'll pick an example from my own experience. There is a company that provides road data for satnav systems. They wanted automatic integrity checks on their data, one of which was that the network should all be connected up, i.e. it should be possible to get to anywhere from any starting point. Checking the data by trying to find routes between all pairs of positions would be impractical. However, it is possible to derive a directed graph from the road network data (in such a way as it encodes stuff like turning restrictions, etc) such that the problem is reduced to finding the strongly connected components of the graph - a standard graph-theoretic concept which is solved by an efficient algorithm.
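For the curious, here is a small Python sketch of that connectivity check under the stated reduction: the network is fully navigable exactly when its directed graph has one strongly connected component. This uses Kosaraju's algorithm; the toy road data is invented:
def strongly_connected_components(graph):
    # graph: {node: [successor, ...]}. Returns a list of components (lists of nodes).
    visited, finish_order = set(), []

    def dfs(start, adjacency, collect):
        # Iterative depth-first search; appends nodes to `collect` in finish order.
        stack = [(start, iter(adjacency.get(start, ())))]
        visited.add(start)
        while stack:
            node, successors = stack[-1]
            advanced = False
            for nxt in successors:
                if nxt not in visited:
                    visited.add(nxt)
                    stack.append((nxt, iter(adjacency.get(nxt, ()))))
                    advanced = True
                    break
            if not advanced:
                stack.pop()
                collect.append(node)

    for node in graph:                       # pass 1: record finish order
        if node not in visited:
            dfs(node, graph, finish_order)

    reversed_graph = {}
    for node, successors in graph.items():   # build the reverse graph
        for nxt in successors:
            reversed_graph.setdefault(nxt, []).append(node)

    visited.clear()
    components = []
    for node in reversed(finish_order):      # pass 2: reverse graph, decreasing finish time
        if node not in visited:
            component = []
            dfs(node, reversed_graph, component)
            components.append(component)
    return components

# A one-way loop is fully connected: exactly one strongly connected component.
roads = {"A": ["B"], "B": ["C"], "C": ["A"]}
print(len(strongly_connected_components(roads)) == 1)   # prints True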
I've been taking a course on software testing, and 3 of the lectures were dedicated to reviewing discrete mathematics, in relation to testing. Thinking about test plans in those terms seems to really help make testing more effective.
Understanding of set theory in particular is especially important for database development.
I'm sure there are numerous other applications, but those are two that come to mind here.
Just one example of many, many...
In build systems it's popular to use topological sorting of the jobs to be done.
By build system I mean any system where we have to manage jobs with dependency relations.
It can be compiling a program, generating a document, constructing a building, organizing a conference - so there are applications in task management tools, collaboration tools, etc.
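A minimal Python sketch of that idea, using the standard library's graphlib (Python 3.9+); the job names are made up:
from graphlib import TopologicalSorter

# Each job maps to the set of jobs it depends on.
jobs = {
    "link": {"compile_a", "compile_b"},
    "compile_a": {"generate_headers"},
    "compile_b": {"generate_headers"},
    "generate_headers": set(),
}

# static_order() yields every job after all of its dependencies.
print(list(TopologicalSorter(jobs).static_order()))
# e.g. ['generate_headers', 'compile_a', 'compile_b', 'link']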
I believe testing itself properly proceeds from modus tollens, a concept of propositional logic (and hence discrete math), modus tollens being:
P=>Q. !Q, therefore !P.
If you plug in "If the feature is working properly, the test will pass" for P=>Q, and then take !Q as given ("the test did not pass"), then, if all these statements are factually correct, you have a valid, sound basis for returning the feature for a fix. By contrast, many, maybe most testers operate by the principle:
"If the program is working properly, the test will pass. The test passed, therefore the program is working properly."
This can be written as: P=>Q. Q, therefore P.
But this is the fallacy of "affirming the consequent" and does not show what the tester believes it shows. That is, they mistakenly believe that the feature has been "validated" and can be shipped. When Q is given and P=>Q holds, P may in fact be either true or false, as the truth table below shows.
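Here is that truth table; the third row is the case the fallacious reasoning overlooks:
P | Q | P=>Q
T | T |  T
T | F |  F
F | T |  T   <-- Q holds and P=>Q holds, yet P is false
F | F |  T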
Modus tollens is core to Karl Popper's notion of science as falsification, and testing should proceed in much the same way. We're attempting to falsify the claim that the feature always works under every explicit and implicit circumstance, rather than attempting to verify that it works in the narrow sense that it can work in some prescribed way.

Essential techniques for pinpointing missing requirements?

An initial draft of the requirements specification has been completed, and now it is time to take stock of the requirements and review the specification. Part of this process is to make sure that there are no sizeable gaps in the specification. Needless to say, such gaps lead to highly inaccurate estimates, inevitable scope creep later in the project, and ultimately to a death march.
What are the good, efficient techniques for pinpointing missing and implicit requirements?
This question is about practical techniques, not general advice, principles, or guidelines.
Missing requirements are anything crucial to the completeness of the product or service that was not thought of or was forgotten about.
Implicit requirements are things that users or customers naturally assume will be a standard part of the software without having to ask for them explicitly.
I am happy to revisit the accepted answer if someone submits a better, more comprehensive solution.
Continued, frequent, frank, and two-way communication with the customer strikes me as the main 'technique' as far as I'm concerned.
It depends.
It depends on whether you're being paid to deliver what you said you'd deliver or to deliver high quality software to the client.
If the former, simply eliminate ambiguity from the specifications and then build what you agreed to. Try to stay away from anything not measurable (like "fast", "cool", "snappy", etc...).
If the latter, it's what Galwegian said plus time, or simply cut everything that is not absolutely drop-dead critical and build that as quickly as you can. Production has a remarkable way of illuminating what you missed in Analysis.
evaluate the lifecycle of the elements of the model with respect to a generic/overall model such as
acquisition --> stewardship --> disposal
do you know where every entity comes from and how you're going to get it into your system?
do you know where every entity, once acquired, will reside, and for how long?
do you know what to do with each entity when it is no longer needed?
for a more fine-grained analysis of the lifecycle of the entities in the spec, make a CRUDE matrix for the major entities in the requirements; this is a matrix with the operations/applications as the rows and the entities as the columns. In each cell, put a C if the application Creates the entity, R for Reads, U for Updates, D for Deletes, or E for "Edits"; 'E' encompasses C,R,U, and D (most 'master table maintenance' apps will be Es). Then check each column for C,R,U, and D (or E); if one is missing (except E), figure out if it is needed. The rows and columns of the matrix can be rearranged (manually or using affinity analysis) to form cohesive groups of entities and applications which generally correspond to subsystems; this may assist with physical system distribution later.
It is also useful to add a "User" entity column to the CRUDE matrix and specify for each application (or feature or functional area or whatever you want to call the processing/behavioral aspects of the requirements) whether it takes Input from the user, produces Output for the user, or Interacts with the user (I use I, O, and N for this, and always make the User the first column). This helps identify where user-interfaces for data-entry and reports will be required.
the goal is to check the completeness of the specification; the techniques above are useful to check to see if the life-cycle of the entities are 'closed' with respect to the entities and applications identified
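as an illustration of the CRUDE matrix described above (the entities and applications here are invented for a hypothetical order-entry system):
                         Customer   Order   Product
Customer maintenance        E         -        -
Order entry                 R         C        R
Order fulfilment            R         U        R
Product maintenance         -         -        E
reading down the Order column shows it is Created and Updated but never Deleted or archived - exactly the kind of gap this check is meant to surface; adding the User column described above would similarly flag applications with no user-facing input or output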
Here's how you find the missing requirements.
Break the requirements down into tiny little increments. Really small. Something that can be built in two weeks or less. You'll find a lot of gaps.
Prioritize those into what would be best to have first, what's next, and so on down to what doesn't really matter very much. You'll find that some of the gap-fillers didn't matter. You'll also find that some of the original "requirements" are merely desirable.
Debate the differences of opinion as to what's most important to the end users and why. Two users will have three opinions. You'll find that some users have no clue, and none of their "requirements" are required. You'll find that some people have no spine, and things they aren't brave enough to say out loud are "required".
Get a consensus on the top two or three only. Don't argue out every nuance. It isn't possible to envision software. It isn't possible for anyone to envision what software will be like and how they will use it. Most people's "requirements" are descriptions of how they struggle to work around the inadequate business processes they're stuck with today.
Build the highest-priority, most important part first. Give it to users.
GOTO 1 and repeat the process.
"Wait," you say, "What about the overall budget?" What about it? You can never know the overall budget. Do the following.
Look at each increment defined in step 1. Provide a price-per-increment. In priority order. That way someone can pick as much or as little as they want. There's no large, scary "Big Budgetary Estimate With A Lot Of Zeroes". It's all negotiable.
I have been using a modeling methodology called Behavior Engineering (bE) that uses the original specification text to create the resulting model. Once you have the model, it is easier to identify missing or incomplete sections of the requirements.
I have used the methodology on about six projects so far, ranging from fewer than a hundred requirements to over 1300 requirements. If you want to know more, I would suggest going to www.behaviorengineering.org - there are some really good papers there regarding the methodology.
The company I work for has created a tool to perform the modeling. The work rate to actually create the model is about 5 requirements an hour for a novice and about 13 an hour for an expert. The cool thing about the methodology is that you don't really need to know anything about the domain the specification is written for. Using just the user text, such as nouns and verbs, the modeller will find gaps in the model in a very short period of time.
I hope this helps
Michael Larsen
How about building a prototype?
While reading tons of literature about software requirements, I found these two interesting books:
Problem Frames: Analysing & Structuring Software Development Problems by Michael Jackson (not the singer! :-).
Practical Software Requirements: A Manual of Content and Style by Benjamin Kovitz.
These two authors really stand out from the crowd because, in my humble opinion, they are making a really good attempt to turn development of requirements into a very systematic process - more like engineering than art or black magic. In particular, Michael Jackson's definition of what requirements really are - I think it is the cleanest and most precise that I've ever seen.
I wouldn't do these authors a good service by trying to describe their approach in a short posting here, so I am not going to do that. But I will try to explain why their approach seems to be extremely relevant to your question: it allows you to boil down most (not all, but most!) of your requirements development work to processing a bunch of checklists telling you what requirements you have to define to cover all important aspects of the entire customer's problem. In other words, this approach is supposed to minimize the risk of missing important requirements (including those that often remain implicit).
I know it may sound like magic, but it isn't. It still takes a substantial mental effort to come to those "magic" checklists: you have to articulate the customer's problem first, then analyze it thoroughly, and finally dissect it into so-called "problem frames" (which come with those magic checklists only when they closely match a few typical problem frames defined by the authors). Like I said, this approach does not promise to make everything simple. But it definitely promises to make the requirements development process as systematic as possible.
If requirements development in your current project is already quite far from the very beginning, it may not be feasible to try to apply the Problem Frames Approach at this point (although it greatly depends on how your current requirements are organized). Still, I highly recommend reading those two books - they contain a lot of wisdom that you may still be able to apply to the current project.
My last important notes about these books:
As far as I understand, Mr. Jackson is the original author of the idea of "problem frames". His book is quite academic and theoretical, but it is very, very readable and even entertaining.
Mr. Kovitz's book tries to demonstrate how Mr. Jackson's ideas can be applied in real practice. It also contains tons of useful information on writing and organizing the actual requirements and requirements documents.
You can probably start with Kovitz's book (and refer to Mr. Jackson's book only if you really need to dig deeper on the theoretical side). But I am sure that, at the end of the day, you should read both books, and you won't regret it. :-)
HTH...
I agree with Galwegian. The technique described is far more efficient than the "wait for customer to yell at us" approach.