How should I use the results of Devel::Cover to make changes to my code? What should I do next?
Use Devel::Cover to identify which parts of your code have not been exercised by your tests. If some parts are not covered, you would typically add more tests until all of your code is.
In some cases, Devel::Cover will identify parts of your code which cannot be tested. If that is the case, you may decide to delete that part of your code.
Structural coverage is a metric of how thoroughly your code has been exercised. It's normally collected while running tests and thus provides an approximation of the completeness of your test suite.
Incomplete coverage means that you have functionality that isn't being exercised and therefore isn't being tested. Normally you would add more tests to increase the coverage. Missed coverage can also be an indication of unnecessary functionality (which can be removed) or of logical errors that prevent full exercise of the code. It's up to you to analyze your coverage reports and determine which course of action is appropriate.
Note that "covered" just means "executed." It is not the same as "tested" and definitely not the same as "correct." I recommend setting Devel::Cover's options (specifically -ignore, -inc, and -select) so you collect coverage data only for the module actively under test. This reduces the risk of incidental coverage of untested code.
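For example, here's a sketch of how you might do that with prove, assuming a conventional lib/ and t/ layout and a hypothetical module My::Module (as I understand the option semantics, a file matching -select is covered even if it also matches the catch-all -ignore):

HARNESS_PERL_SWITCHES=-MDevel::Cover=-select,^lib/My/Module,-ignore,. prove -l t/my_module.t
cover -report html

The first command collects the coverage database while running the test; the second generates an HTML report from it.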
This question has some good responses, but has been moved to a more appropriate forum at this link.
Online systems, such as ALEKS, Cengage's WebAssign, and even Khan Academy employ some kind of logical matching for polynomial expressions and numerical values (i.e., fractions). What free tools (libraries, command-line programs, scripts, etc.) exist that can provide expression/numerical matching? For example, a student enters the expression

2*p^2*(p + 5) - 8

but the following expression is equivalent and would also be acceptable:

2*(p^2 + 4*p - 4)*(p + 1)
The question of how to do this mathematically has an excellent answer in this post, and a question addressing one particular way to implement it has a partial answer in this post. SymPy looks promising, but the command-line Maxima could work, and so could the WolframAlpha API, Maple, MATLAB, and any number of symbolic computer algebra systems.
It's fine to talk about things that "could work", but what tools are already being used? How has this already been implemented? Can anyone speak from experience about what online math learning programs are using on the backend? Give examples or direct to existing projects.
To clarify the question: I'm talking about logically comparing simple expressions (middle/high-school math), minimally complicated, with canonical forms typically easy to obtain. The implementation will be online (html+nifty_tool) and input will most likely be captured as a string, unless someone can suggest a better input method for math learners; a LaTeX front-end, perhaps?
Provided that you can translate the student's input into Python, it would be easy enough to verify the equality of expressions in most cases. For instance,
>>> from sympy import *
>>> var('p')
p
>>> f_1 = 2*p**2*(p+5)-8
>>> f_2 = 2*(p**2+4*p-4)*(p+1)
>>> f_1.expand()==f_2.expand()
True
If you have an input widget that enables a student to enter expressions of the kind displayed in your question, and that outputs LaTeX, say, then you might be able to use a parser such as https://github.com/alvinwan/tex2py to get the inputs you need for sympy.
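If the student's input arrives as a plain string, sympy can also parse it directly; here is a minimal sketch (the input strings are hypothetical, already normalized to Python's ** power syntax):

>>> from sympy import simplify
>>> from sympy.parsing.sympy_parser import parse_expr
>>> answer = parse_expr('2*p**2*(p + 5) - 8')
>>> expected = parse_expr('2*(p**2 + 4*p - 4)*(p + 1)')
>>> simplify(answer - expected) == 0
True

Checking simplify(a - b) == 0 tends to be more robust than comparing expanded forms once inputs go beyond polynomials, at the cost of some speed.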
Take a look at STACK, which is an automated system for assessing students' math answers. STACK is based on Maxima. The main web site appears to be: http://www.stack.ed.ac.uk/
I found some other links that could be of interest to you:
Moodle plugin for STACK: https://moodle.org/plugins/qtype_stack
Resources for the Moodle plugin at Github: https://github.com/maths/moodle-qtype_stack
Some description about how STACK uses Maxima: https://github.com/maths/moodle-qtype_stack/blob/master/doc/en/CAS/Maxima.md
I'm actually not sure how STACK makes use of Maxima to determine whether an answer is correct. If the form of the answer doesn't matter, then ratsimp(answer - expected) should be 0 if answer is equivalent to expected. But if the form of the answer must be verified as well, the comparison becomes more complicated. I can imagine some ways to do that, but I don't know what STACK actually does.
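For illustration, the check on the two polynomial forms from the sympy answer above would look something like this in Maxima:

(%i1) ratsimp(2*p^2*(p+5) - 8 - 2*(p^2+4*p-4)*(p+1));
(%o1) 0

A nonzero result would mean the answer is not equivalent to the expected expression, at least not as a rational expression.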
The issues forum for the GitHub project (https://github.com/maths/moodle-qtype_stack/issues) seems to have a fair amount of traffic, so perhaps if you run into problems you can ask for help there.
We use Perl extensively in our project, which fetches data from a DB and installs several components. Now we want to upgrade it from 4.2 to 5. I would like to know what changed, and what features the latest Perl version has over 4.2. Please guide me on getting this done.
Some things perl4 didn't have:
lexically scoped (my) variables
data structures (beyond simple arrays-of-scalars and hashes-of-scalars)
references (including references to subroutines, which let you abstract over behavior)
closures / first-class functions
OO: classes, methods, and objects
a module system, and a way to implement parts or all of a module in C, letting you bind to external libraries
CPAN: a central repository for modules written by other people for almost any task you can think of (current count: 188,959 modules)
pragmas that can warn you about or disable dangerous and questionable operations (use strict, use warnings)
Unicode support (in strings + all core operations; encoding layers in file handles)
subroutines that can be called like (most) builtins (& and parentheses not required, special calling conventions can be enabled by using "prototypes")
tie: a variable (scalar/array/hash) can be backed by an object; operations on the variable automatically invoke methods on the object instead
threads
overridable keywords
exception handling with die/eval {}
tons and tons of regex enhancements
... and hundreds of things I don't remember and can't list here. Seriously, perl5 is a very different language from perl4, even if many perl4 features are still there.
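To make a few of those concrete, here is a tiny perl5 script (none of it would run under perl4) showing lexical variables, a closure, a reference, and a nested data structure:

use strict;
use warnings;

# A closure over a lexical variable.
sub make_counter {
    my $count = shift;
    return sub { return $count++ };
}

my $next = make_counter(10);
print $next->(), "\n";    # 10
print $next->(), "\n";    # 11

# A nested data structure: a hash whose value is an array reference.
my %config = (hosts => ['alpha', 'beta']);
print scalar @{ $config{hosts} }, "\n";    # 2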
(The following is a kind of a "theoretical MCVE" of the kinds of complexity I'm encountering in organizing source code I'm working on. You can treat it as a concrete problem and that would be good, or you can refer to the general concerns it brings up and suggest how to address them.)
Suppose I have modules of code A, B, C and D. A depends on B,C,D; B depends on C; C, D don't depend on other modules. (I use the term "modules" loosely, so no nitpicking here please).
Additionally, in all of A,B,C,D, a few identical header files are used, and perhaps even a compiled object, and it doesn't make sense to put these together and form a fifth module because it would be too small and useless. Let's have foo.h be one of the files in that category.
While all of these modules are kept within a single monolithic code repository, all is good. There's exactly one copy of everything; no linker conflicts between objects compiled with the same functions etc.
The question is: How do I make each of B, C, D into a version-managed repository, so that:
Each of them can be built with only the presence of the modules it depends on (either as submodules/subrepositories or some other way); and
I do not need to manually maintain/update separate versions of the same files, or make carry-over commits from one library to the next (except perhaps changing the pointed-to revision); and
When everything is built together (i.e. when building A), the build does not involve quadruple copies of foo.h and a double copy of C (once for A and once for B), which I would otherwise have to keep perfectly synchronized.
Note that when I have a bit more time I'll edit this to make the question more concrete (even though I kind of like the broad question). I will say that in my specific case the code is modern C++, CUDA, some bash scripts and CMake modules. So a Java-oriented solution would not do.
Very briefly, you might want to explore an artifact repository (e.g. Artifactory) and dependency-management solutions (Ivy, Maven, Gradle). Then, as the shared base module is built, you stick it in the artifact repo. When you want to build a top module that depends on the shared base module, the build script simply pulls down the latest version of the shared base module and compiles/links the top module against what it pulled down.
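Since the question rules out Java-oriented tooling and mentions CMake: a different, lower-tech approach (offered as a sketch, with hypothetical external/ submodule paths) is to keep B, C, and D as git submodules and guard each add_subdirectory call, so that C is only added once per build tree no matter how many modules depend on it. This assumes each module's CMakeLists.txt defines a target named after the module:

# In A's top-level CMakeLists.txt; B's CMakeLists.txt uses the same
# pattern for its own dependency on C.
if(NOT TARGET C)
    add_subdirectory(external/C)
endif()
if(NOT TARGET D)
    add_subdirectory(external/D)
endif()
if(NOT TARGET B)
    add_subdirectory(external/B)  # B's guard then skips re-adding C
endif()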
Among all the various incomplete lists of features going into Scala 2.10, there are various mentions of improvements to Scaladoc. But it's unclear which ones there are, and which ones are actually going in -- e.g. one of the lists of improvements says "fixes to Scaladoc" with links to various pull requests, some of which got rejected.
Can anyone summarize what's actually changed between Scala 2.9 and 2.10 milestone 4, and maybe indicate what else is planned for 2.10 itself?
Also, are they finally going to fix the problem of not being able to link to methods? E.g. littered throughout my code I have things like this:
/**
 * Reverse the encoding computed using `encode_ngram`.
 */
def decode_ngram(ngram: String): Iterable[String] = {
  DistDocument.decode_ngram_for_counts_field(ngram)
}
where I want to refer to another method in the same class, but AFAIK there's simply no way to do it. IMO it should be something obvious like [[encode_ngram]] -- i.e. I definitely shouldn't need to give an absolute class (which would make everything break as soon as I pull out a class and stick it somewhere else), and I shouldn't need to give the parameter types if the method name itself is unambiguous (i.e. non-polymorphic).
Several new features, as well as many bugfixes, are coming, but there's no definitive list yet of all the fixes that are in. Among the more notable new features:
Implicitly added members will now be visible. A good example is to look at scala.Array, where methods like map, which you might've assumed you had, are now visible in the Scaladoc.
Automatically-generated SVG inheritance diagrams, for a bird's eye view of relationships between classes/traits/objects at the package-level and then also at the level of individual classes etc. For example, see the Scaladoc diagrams nightly at both the package-level (click "Content Hierarchy") as well as at the class-level.
Method-linking in some limited form should go into 2.10 (not in the nightly yet). (It's actually not totally trivial to implement in its complete form, due to practical stuff like overloading, as you noted.)
Improved use cases: a member with a use case isn't doubly generated anymore, and use cases are now a bit clearer and simpler than before.
(Less notable) Keyboard shortcuts for navigating Scaladoc have been added; they're explained here and here.
For a more exhaustive list of bugfixes, it might be a good idea to write to scala-internals; there's a good chance someone there will compile a list of all major bugfixes in the past year for you.
When I was looking into unit testing for iPhone projects, I found it hard to decide the scale of a "unit". If I have three methods A, B, and C, I can test each of them individually, but sometimes A needs to be called before B in order for B to make sense. For example, if I have addImageWithName: and removeImageWithName:, then I need to add an image first in order to test whether removeImageWithName: really works.
So it comes down to a decision between black-box single-method tests and functional tests ("functional" meaning a function of the application, which may involve more than one method). If time is tight I cannot go with both, so what are the pros and cons of these two approaches?
What I can think of:
=== single method test ===
pros:
- easy to write test cases, as you only need to deal with the input/output of individual methods
cons:
- methods need to be highly decoupled, so that one method does not rely on another
- sometimes impossible; for example, an undo method has to rely on a 'do' method
=== functional test ===
pros:
- higher level than per-method tests, as it targets the functions of the app
cons:
- harder to write test cases if the function is complicated
- may not cover all the cases for each individual method involved in a particular function
So what is the correct decision?
Thanks!
A single-method test is the best way to write a unit test in Xcode. If your method depends on another method to complete, you can use an asynchronous unit test; the GHUnit test framework supports testing async methods. BTW: which are you using for testing, OCUnit or GHUnit? Hope this helps.
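As an illustration of the add-before-remove case from the question, here is a sketch of an OCUnit (SenTestingKit) test; the store ivar and the imageWithName: accessor are hypothetical names and would be set up in setUp:

- (void)testRemoveImageWithName
{
    // removeImageWithName: only makes sense after an add, so the
    // add/remove pair is exercised as one unit of behavior.
    [store addImageWithName:@"avatar"];
    [store removeImageWithName:@"avatar"];
    STAssertNil([store imageWithName:@"avatar"],
                @"image should be gone after removal");
}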