As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
I'm currently in my first year of college, studying Computer Science. I'm having great trouble with a course called Numerical Methods because I'm weak at mathematics; I don't have a solid grounding in basic math concepts.
Could any of you please recommend a good book, tutorial site, or set of videos on Numerical Methods for people who don't have a solid grasp of basic mathematics?
I tried looking for something like "Numerical Methods for Dummies", but I didn't find anything equivalent.
For good reference books on numerical methods, you can check out the answers to the question What is the best book on numerical methods?
Bear in mind that numerical methods are a subfield of mathematics. Thus you do need a solid grasp of basic mathematics, and you will probably need to look for other books (not Numerical Methods ones) to fill your knowledge gaps.
Try Numerical Methods for Scientists and Engineers
Try sniffing around the internet for course notes (lecture notes) from other universities, especially for first year/second year engineering mathematics (for a general grounding) and numerical methods after that.
My university had very good lecture notes, but I can't distribute them. Other universities are more liberal.
You may also want to check out MIT OpenCourseWare, which contains lecture notes and course materials from the Massachusetts Institute of Technology. Some of the lecturers at MIT literally wrote the book on their respective fields, so you can't go wrong with anything you find there.
Links to interesting courses on MIT OCW:
18.330 Introduction to Numerical Analysis
6.042J Mathematics for Computer Science
I do not know if this suits you... but you can try this:
http://apps.nrbook.com/c/index.html - Numerical Recipes ( http://www.nr.com/ )
As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 9 years ago.
In my application I am downloading thousands of records, and I have to insert them into Core Data.
I have to estimate the time to download and process the data.
Download time depends on the user's internet speed.
Can you please tell me how much time it takes to insert 10,000 records into Core Data?
Thanks,
Jack.
You are asking several questions in one and they have been answered already. Let me wrap it all up and give you further information regarding your question.
Time of bulk inserts: Depends on the kind of data and on the computation power / hard disk of your device. It also depends on how you actually perform the bulk insert.
Improve performance if needed: There are many things you can do to increase performance. If you encounter performance problems when inserting objects in bulk, have a look at the following post on Stack Overflow: Improving Performance of Bulk Inserts. There is also a chapter called "Efficiently Importing Data" in the Core Data Programming Guide which you should read.
Estimating the remaining time of the download: A naive calculation of the time remaining would only take the current speed and the number of bytes left to download into consideration. This is usually a very bad estimate that jumps around a lot. To smooth the estimate you should use a moving average, which takes previous values into consideration. An algorithm that uses a moving average to estimate the remaining time can be found on Stack Overflow as well.
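As a rough illustration of that idea (not taken from the linked post; the class name, window size, and units are my own assumptions), a simple moving average over recent transfer speeds might look like this in Java:

    import java.util.ArrayDeque;
    import java.util.Deque;

    /** Sketch: estimate remaining download time from a moving average of recent speeds. */
    public class DownloadTimeEstimator {
        private final int windowSize;                              // number of samples to average over
        private final Deque<Double> samples = new ArrayDeque<>();  // recent speeds in bytes/second

        public DownloadTimeEstimator(int windowSize) {
            this.windowSize = windowSize;
        }

        /** Record the speed observed during the last measurement interval. */
        public void addSample(double bytesPerSecond) {
            samples.addLast(bytesPerSecond);
            if (samples.size() > windowSize) {
                samples.removeFirst();                             // drop the oldest sample
            }
        }

        /** Estimated seconds remaining, based on the smoothed speed. */
        public double estimateSecondsRemaining(long bytesRemaining) {
            if (samples.isEmpty()) {
                return Double.POSITIVE_INFINITY;                   // no data yet
            }
            double sum = 0;
            for (double s : samples) {
                sum += s;
            }
            double averageSpeed = sum / samples.size();
            return bytesRemaining / averageSpeed;
        }
    }

The more samples you keep in the window, the smoother (but also the more sluggish) the estimate becomes.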
Why don't you check yourself by inserting NSLog statements: one after the download is finished and another after the Core Data operation is finished?
The console will give you exact timestamps for the start and end of the operations.
Like David Rönnqvist said before: In most cases downloading takes way longer than inserting items into the database.
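To illustrate the measuring approach in general terms (the question is about iOS, so treat this Java snippet as a sketch of the idea rather than actual app code; the two phase methods are placeholders):

    /** Sketch: time the two phases separately to see which one dominates. */
    public class PhaseTiming {
        public static void main(String[] args) {
            long start = System.nanoTime();
            downloadRecords();                                // placeholder for the download phase
            long afterDownload = System.nanoTime();
            insertRecords();                                  // placeholder for the insert phase
            long afterInsert = System.nanoTime();

            System.out.printf("Download took %d ms%n", (afterDownload - start) / 1_000_000);
            System.out.printf("Insert took %d ms%n", (afterInsert - afterDownload) / 1_000_000);
        }

        private static void downloadRecords() { /* download the records here */ }
        private static void insertRecords()   { /* insert them into the database here */ }
    }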
As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
Suppose I have a list of values {4, 10, 3, 6, 7, 15, 11}. The average of this list is avg = 8. Now I select only those elements > avg, which are {10, 11, 15}. Then I take the average again and select the elements bigger than that new average. I believe this is a helpful technique to get the top values from a list, but I am not sure what this averaging technique is called. Can anybody help me with a name for this method?
Thanks in advance
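For reference, here is a minimal sketch in Java of the procedure described above (the class and method names are purely illustrative):

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    public class AboveAverageFilter {
        /** Keep only values strictly greater than the mean; repeat for the given number of passes. */
        static List<Double> keepAboveAverage(List<Double> values, int passes) {
            List<Double> current = new ArrayList<>(values);
            for (int i = 0; i < passes && current.size() > 1; i++) {
                double mean = current.stream().mapToDouble(Double::doubleValue).average().orElse(0);
                List<Double> next = new ArrayList<>();
                for (double v : current) {
                    if (v > mean) {
                        next.add(v);
                    }
                }
                current = next;
            }
            return current;
        }

        public static void main(String[] args) {
            List<Double> data = Arrays.asList(4.0, 10.0, 3.0, 6.0, 7.0, 15.0, 11.0);
            // First pass: the mean is 8, so {10, 15, 11} survive; a second pass filters again.
            System.out.println(keepAboveAverage(data, 1));    // [10.0, 15.0, 11.0]
        }
    }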
How about using Averoveraging?
I'm not sure what your code looks like, but I'd imagine on one pass you are returning elements higher than the average? So I would name that method ElementsGreaterThanAverage.
If you're only always doing two passes, you could call it TopQuarterByAveraging or something.
You're computing the mean, then showing everything over it. Hence OverMean.
As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
In languages like C, which support pointer operations, you can easily get multiple values out of a procedure. But in languages like Java, it is a pain if you actually need to get multiple return values. (Using an object to wrap multiple values is bad.)
In my experience, allowing multiple return values can help improve software engineering: it gives more flexibility in organising procedure invocations, and so on. But why do so many languages not allow returning multiple values? I am interested to know the reasons. Thank you very much.
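For context, the usual Java workaround looks something like this: a small value class (or, in modern Java, a record) holding both results. The names here are illustrative:

    /** One common workaround: a tiny immutable value class holding both results. */
    public class DivisionResult {
        public final int quotient;
        public final int remainder;

        public DivisionResult(int quotient, int remainder) {
            this.quotient = quotient;
            this.remainder = remainder;
        }

        /** A "procedure" that conceptually returns two values. */
        public static DivisionResult divide(int dividend, int divisor) {
            return new DivisionResult(dividend / divisor, dividend % divisor);
        }

        public static void main(String[] args) {
            DivisionResult r = divide(17, 5);
            System.out.println(r.quotient + " remainder " + r.remainder);  // prints "3 remainder 2"
        }
    }

In C you would instead pass pointers to output parameters; languages such as Go, Python, and Swift support multiple return values or tuples directly.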
Could be because many of the designers of these languages have strong math backgrounds and in math a function can have multiple input parameters but (almost always) only a single output value.
Also, it keeps code understandable and standardized to some extent.
As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
I'm working on a telecommunications device simulator on MATLAB. I'm going to encode some digital data, modulate it, add some noise and attempt to demodulate it, see at what noise levels my data cannot be recovered anymore.
My problem is that I don't know how to import some arbitrary file into my workspace. It's not going to be a .txt file or anything, just some file. How can I make MATLAB read the file in binary format, or whatever it is called?
Try these questions about working with binary data in MATLAB:
Working on Binary Data in Matlab
Read and write from/to a binary file in Matlab
Can you be more specific? Instead of saying "whatever format", you could look at the file extension and specify it. If it is a video, you can read it with mmreader(); if it is an image, you can read it with imread(). So please specify the extension of the file you want to load into MATLAB.
H2H
-Harsha
It turns out I was using the right function with the wrong parameters all along. I opened the file with fopen('filename') and passed the file identifier it returns into A = fread(fileID). That returned an array of each and every byte in the file as decimal values. I'm sure I'll be able to use this data for my project.
Thanks to everybody for their help!
As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 11 years ago.
Is LOC (lines of code) the correct parameter for project estimation?
There are so many scenarios where complexity means a single line of code takes much more time to write.
Other than LOC, what would be a better parameter for project estimation?
People talk about function points of a program; does that refer to use-case-related information?
I am trying to find a solid basis for estimating full software development, covering analysis, design, test case preparation, and coding. Please suggest.
Steve McConnell in Rapid Development (Microsoft Press, 1996):
Because different programming languages produce such different bangs for a given number of lines of code, much of the software industry is moving toward a measure called "function points" to estimate program sizes. A function point is a synthetic measure of program size that is based on a weighted sum of the number of inputs, outputs, inquiries, and files. Function points are useful because they allow you to think about program size in a language-independent way.
Google "Function Point" for more information.
Seeing as developers are likely to* spend most of their time testing changes, lines of code is never a good indicator of the size of a problem.
Let's suppose you have an existing large application - changing a single line of code may seem trivial, but the test planning and execution could take weeks.
Likewise, adding a relatively large amount of code in a single limited-scope module which is easily testable might take only a few days.
* they should do, at least. If they're spending more time writing code than testing it, it is probably full of bugs. And I mean BEFORE it reaches your dedicated QA team.
Only if you use it in the inverse.
-- Edit
But no. It isn't. It's a mostly useless measure, and generally harmful. As you note, less code is almost always better.
Other things to check? Well, what are you trying to measure? What result do you want to see from a change in the things that you would be checking? What sort of decisions will you be making on the basis of these changes?
LOC is one proxy measure for measuring the problem size.
LOC estimate can be used, and LOC count is relatively cheap to measure from historical projects. But LOC can be problematic if used for anything else than a proxy for problem size, as already pointed out by other answers.
Problem size is rather constant given the requirements. From a size estimate you can go to effort, schedule and cost estimates, depending on your planning drivers such as cost or schedule. From historical data you can find out how problem size translates to effort and how the other planning drivers further influence the outcome. So you need to measure size and effort against the other parameters and keep fine-tuning your estimation process. There are some LOC-to-effort figures available in the literature, but they are unlikely to be accurate for your domain, the technology you are using, and the team you have.
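As a toy illustration of going from a size estimate to an effort estimate via historical data (all numbers below are made up; you would derive the productivity figure from your own past projects):

    /** Toy example: derive an effort estimate from a size estimate using historical productivity. */
    public class EffortFromSize {
        public static void main(String[] args) {
            // Historical data: a past project of 40,000 LOC took 2,000 person-hours.
            double historicalLoc = 40_000;
            double historicalHours = 2_000;
            double hoursPerLoc = historicalHours / historicalLoc;  // 0.05 person-hours per LOC

            // New project estimated at 25,000 LOC (LOC used purely as a proxy for problem size).
            double estimatedLoc = 25_000;
            double estimatedHours = estimatedLoc * hoursPerLoc;

            System.out.printf("Estimated effort: %.0f person-hours%n", estimatedHours);  // 1250
        }
    }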
Other proxies for problem size are function points and story points. My experience with function points is that they are rarely worth the effort. On the other hand, story points in agile methods work very well since they are deliberately abstract (thus avoiding a lot of the problems with LOC) and are measured on a sprint-by-sprint basis, with instant feedback into the following sprints.
No, it isn't. The reason is simple: if you produce a new line of code during your development, are you one step closer to a solution? If you estimated 1000 lines of code to complete a task, are you now 0.1% complete with that task?
Lines of code can be used as a metric but only in the negative sense: for a greater number of lines of code, it is reasonable to assume that you have a greater number of bugs. Based on historical data, there is generally a linear correlation between lines of code and bug count.
Here are some useful and measurable factors that are worth considering:
Hours of labor.
Dollars spent: this is a good one because it strongly enforces the reality that you'd rather find bugs at the developer's desktop than in the hands of a tester or customer.
Milestones met: is the system available for the customers on the right date?
Requirements completed: this can be a funny one - what if you discover a new customer need during the project?
In short, lines of code is very nearly the worst possible metric you could ever use.
The only way to get any reasonable estimate on project duration is to COMPLETELY implement and deliver some subset of the final requirements. Then you can estimate the remaining requirements by comparing their complexity against the completed work.
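In other words, something like this back-of-the-envelope calculation (all numbers hypothetical, and "complexity points" is whatever relative sizing unit you use):

    /** Estimate remaining duration by comparing remaining work against delivered work. */
    public class RemainingEstimate {
        public static void main(String[] args) {
            // Completed so far: 12 complexity points delivered in 30 working days.
            double completedPoints = 12;
            double daysSpent = 30;
            double daysPerPoint = daysSpent / completedPoints;  // observed pace: 2.5 days per point

            // Remaining requirements, sized by comparison against the completed work: 20 points.
            double remainingPoints = 20;
            double estimatedDaysRemaining = remainingPoints * daysPerPoint;

            System.out.printf("Estimated days remaining: %.0f%n", estimatedDaysRemaining);  // 50
        }
    }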