We have an ERP system running in our company based on Progress 8. Can you give an indication of how compatible OpenEdge 11 is with version 8? Is it more a case of "compile the source and it will run" (with testing, of course :-)), or more like every second line needing rework?
I know it's a general question, but maybe you can provide a general answer? :o)
Thanks,
Gunter
Yes. Convert the db and recompile.
Sometimes you might run across keyword conflicts. A quick fix for that is the -k parameter (the "keyword forget list"). Using -k is a quick way to get old code to compile when its variables or table/field names have become new keywords, while you work on renaming them.
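For example, if (hypothetically) your schema had fields named size and status that became reserved keywords in a later release, a client startup along these lines would let the old code compile while you rename them. The mpro script and the database name are placeholders, and the exact -k syntax should be checked against the startup-parameter reference for your release:

mpro mydb -k "size,status"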
You might also see the occasional situation where the compiler has tightened the rules a bit. For instance, the rules around defining shared variables were tightened in the v8/v9 time frame; most of what I remember about that was looking at the impacted code and asking myself, "how did that ever compile to start with?"
Another potential issue: if your application uses a framework (such as "smart objects") whose API might change from release to release, it is important to make sure that you compile against the version of that framework that your code requires, not something newer but different.
Obviously you need to test, but the overwhelming majority of code recompiles and runs without any issues.
We just did the conversion from Progress 8.3E to OpenEdge 11 a few days ago. It went much as Tom described: convert and recompile.
The only problem was one database that had originally been created in Progress version 7. There the conversion failed, but since it was a small database, it was quicker to dump, recreate, and load.
I am using Eclipse (version: Kepler Service Release 1) with the Prolog Development Tool (PDT) plug-in for Prolog development in Eclipse. I used these installation instructions: http://sewiki.iai.uni-bonn.de/research/pdt/docs/v0.x/download.
I am working with Multi-Agent IndiGolog (MIndiGolog) 0 (the preliminary Prolog version of MIndiGolog), downloaded from here: http://www.rfk.id.au/ramblings/research/thesis/. I want to use MIndiGolog because it represents time and the duration of actions very nicely (I want to do temporal planning), and it supports planning for multiple agents (including concurrency).
MIndiGolog is a high-level programming language based on the situation calculus. Everything in the language follows the situation calculus exactly. This, however, does not fit the project I'm working on.
Another high-level programming language, Incremental Deterministic (Con)Golog (IndiGolog) (download from here: http://sourceforge.net/p/indigolog/code/ci/master/tree/), also written in Prolog, is likewise (loosely) based on the situation calculus, but uses fluents in a very different way. It makes use of causes_val predicates to denote which action changes which fluent in what way, and it does not include the situation in the fluent!
However, this is what the rest of the team actually wants. I need to rewrite MIndiGolog so that it is still an offline planner with the nice representation of time and action durations, but uses IndiGolog's causes_val predicates to change the values of the fluents.
I find this extremely hard to do, as my knowledge of Prolog and of the situation calculus only covers the basics, but they see me as the expert. I feel like I'm in over my head and could use all the help and/or advice I can get.
I already removed the situations from my fluents, made a planning domain with causes_val predicates, and tried to add IndiGolog code into MIndiGolog, but with no luck. Running the planner just returns "false.", and I can make little sense of the trace, even when I use the GUI-tracer version of the SWI-Prolog debugger or when I try to place spy points as strategically as possible.
Thanks in advance,
Best, PJ
If you are still interested (sounds like you might not be): this isn't actually very hard.
If you look at Reiter's book, you will find that causes_val clauses are just effect axioms, while fluents that mention the situation are usually defined by successor state axioms. There is a deterministic way to convert from the former to the latter, and the correct interpretation of causes_val happens in the implementation of regression. This is always the same, so you can just copy that part of the Prolog code from IndiGolog to your flavor.
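A minimal sketch of that construction in plain Prolog (the toy domain, the holds/2 predicate, and all names here are simplified stand-ins for the real MIndiGolog/IndiGolog machinery):

% Hypothetical mini-domain: flipping a switch toggles a light.
% Effect axioms in the IndiGolog causes_val style:
causes_val(flip(Sw), light(Sw), on,  light(Sw) = off).
causes_val(flip(Sw), light(Sw), off, light(Sw) = on).

% Initial situation:
holds(light(kitchen) = off, s0).

% A generic successor state axiom built from causes_val/4:
% F has value V after do(A, S) iff A just caused it (effect case),
% or F already had value V and A assigns nothing to F (frame case).
holds(F = V, do(A, S)) :-
    (   causes_val(A, F, V, Cond),
        holds(Cond, S)
    ;   holds(F = V, S),
        \+ ( causes_val(A, F, _, C), holds(C, S) )
    ).

Querying holds(light(kitchen) = V, do(flip(kitchen), s0)) then yields V = on, exactly as a hand-written successor state axiom would.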
I have a medium-to-large system built in Perl that has been developed over the last 15 years and consists of many scripts and .pm files.
In order to improve the system I need more data. The easiest way I can see to get that data is to have every function in the code print its start and end time to some log, so that it becomes possible to understand what is taking the most time.
However, this is an old system, some parts are less maintainable than others, and on top of that it needs to keep running, which means that in order to get real data I need it to print this out from production.
What I want to do is to somehow override the function declarations so that each function, when entered, logs a line like
NAME start STARTTIME PARAMS
and when it leaves the function
NAME ended STARTTIME PARAMS
Can anybody point me in the right direction?
Thanks
Take a look at Devel::NYTProf. It can profile the amount of time that all of your subs are taking (and do a lot more). It doesn't involve a lot of messy code modification; instead you just run your script with it:
perl -d:NYTProf your_script.pl
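After the run, NYTProf leaves a nytprof.out file in the current directory; the nytprofhtml tool that ships with the module turns it into a browsable report (by default it writes into ./nytprof/):

nytprofhtml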
The previous answers are spot on (I especially recommend Devel::NYTProf). However, a more general technique you could apply to gather data about your subroutines' behaviour is fiddling with the symbol table, "appending" (or prepending) code to the actual sub's code.
A couple of pointers:
In Perl, can I call a method before executing every function in a package? (this answer shows a code example you could adapt to your particular situation)
Hook::LexWrap is a module that lets you augment subroutine behaviour in several ways, without touching the original code (see the sketch after this list)
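For completeness, here is a minimal sketch of the symbol-table approach, wired to the log format from the question. The package and sub names are invented, and real code should be more careful (e.g. skip imported subs); for production use, prefer Hook::LexWrap or a profiler:

#!/usr/bin/perl
use strict;
use warnings;
use Time::HiRes qw(time);

package My::Legacy;                     # hypothetical legacy module
sub work { my @args = @_; return scalar @args; }

package main;

# Replace every sub in a package with a wrapper that logs entry and exit.
sub wrap_package {
    my ($pkg) = @_;
    no strict 'refs';
    for my $name (keys %{"${pkg}::"}) {
        my $full = "${pkg}::${name}";
        next unless defined &{$full};   # only wrap actual subs
        my $orig = \&{$full};
        no warnings 'redefine';
        *{$full} = sub {
            my $t0 = time;
            warn "$full start $t0 @_\n";
            my @ret = wantarray ? $orig->(@_) : scalar $orig->(@_);
            warn "$full ended $t0 @_\n";
            return wantarray ? @ret : $ret[0];
        };
    }
}

wrap_package('My::Legacy');
My::Legacy::work(1, 2, 3);              # logs "... start ..." and "... ended ..."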
HTH
Sounds like you need to use a profiler:
http://www.perl.org/about/whitepapers/perl-profiling.html
Perl profilers usually have a huge impact on program performance, so using them in production may not be a great idea.
You can try Devel::ContinuousProfiler that claims to have very low impact (I myself have never used it, just discovered it this morning!)
I have a rather large project where compilation takes more than an hour on a Mac with an i5 processor.
Changing just one little piece of code in one place makes the complete, long compilation necessary.
Is there any way to reduce this time?
I was thinking about "precompiling of classes" or "pre-linking" if there is anything like that.
Even uploading a little app to a device takes 10 seconds.
P.S. Can anyone share experience as to whether Xcode 4.3 is faster on the new Retina Macs in this context?
Many thanks!
1) Use a precompiled header, and remove any imports of those files (UIKit, Foundation, Cocoa, etc.) that Xcode adds when you create classes.
2) Add reasonably stable user header files to the .pch as well, to reduce the precompile work (see the sketch below).
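For reference, a typical Prefix.pch along those lines (the project header name is illustrative):

// Prefix.pch
#ifdef __OBJC__
    #import <Foundation/Foundation.h>
    #import <UIKit/UIKit.h>
    // Reasonably stable project-wide headers can go here too, e.g.:
    // #import "AppConstants.h"
#endif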
In your classes, put most of the imports in the implementation file (.m), not the headers. Use forward declarations where appropriate, as sketched below. See '@class vs. #import' and 'Importing header in objective c'.
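A small sketch of that pattern (class and file names are made up):

// MyViewController.h -- forward-declare instead of importing the header.
@class MyModel;

@interface MyViewController : UIViewController   // UIKit comes from the .pch
@property (nonatomic, strong) MyModel *model;
@end

// MyViewController.m -- the full import belongs in the implementation file.
#import "MyViewController.h"
#import "MyModel.h"

@implementation MyViewController
@end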
You might consider moving a stable and well confined part of your main project into a separate project and include it as a static library in the main project.
Recently I removed a few libraries that I had been referencing as .a files and moved their source in with the project's code. The speed increased amazingly. Compilation used to take 15 minutes; now it takes 15 seconds. Indexing used to take all day to finish (just in time for shut-down), but now it is really fast. The libraries were on a network drive, which may have been exacerbating the problem.
When software is developed, various types of testing are done: unit, integration, functional, manual. My current project (WinForms with SQL Server) has legacy code (no tests), and we have a lot of bugs.
We are trying to remove them using a combination of manual testing and automated tests (mostly integration tests).
But still, some bugs can escape.
For example (a hypothetical scenario): if a customer has purchased goods worth a certain amount over the last 6 months, he should be given a discount on the purchases he makes once those 6 months have lapsed, and his status should be updated to privileged.
But for some reason (a bug in the code) the system is not doing so. How should we tackle such scenarios? Should we have a script running on the database that looks for scenarios such as the one described? Another extension of the scenario could be that the customer must be sent a gift once he becomes privileged, but the system fails to do so.
Thoughts?
"Should we have a script running on the database which looks for scenarios such as described?"
If you mean "put a script in the database to correct the problem", then no.
NO. Never. Under no circumstances. Working around a bug by adding peculiar special-case logic is really a very bad idea.
When that peculiar special-case logic has its own bugs, you've added buggy code to try to correct buggy code. A net loss.
When you try to enhance the system, you have this peculiar special-case logic that doesn't make any sense.
a. If you're lucky, you fixed the bug it was supposed to work around, and it will be redundant. What now? Which copy to remove?
b. Otherwise, it will contradict other code. What now? Which is right?
If you mean "put a script in the database to help find and debug the problem", then yes. For a short time, use every tool at your disposal to find and fix bugs. Once found and fixed, this script is then useless and must be deleted.
If you mean "write a script in the database to test the application", then yes. That's what unit test scripts are for. Use them.
It is far better to create unit tests than it is to create scripts that you put in the database. Unit tests are the best approach.
You should have an automated test suite in place. This test suite implements all the scenarios that the specification requires. Since one cannot wait six months to test that the discounting works, the actual implementation is replaced by a mock implementation (the example is in Java, but the same principles apply in other languages) that, for example, "simulates" that 6 months have passed. One can use assertions to automate the tests.
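As an illustrative sketch of that idea (all class and method names here are invented for the example), injecting a Clock lets the test fast-forward past the six months:

import java.time.Clock;
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.temporal.ChronoUnit;

// Production code asks an injected Clock for "now" instead of calling Instant.now().
class DiscountService {
    private final Clock clock;
    DiscountService(Clock clock) { this.clock = clock; }

    // A customer becomes privileged once 6 months (roughly 180 days) have passed.
    boolean isPrivileged(Instant firstPurchase) {
        return firstPurchase.plus(180, ChronoUnit.DAYS).isBefore(clock.instant());
    }
}

public class DiscountServiceTest {
    public static void main(String[] args) {
        Instant firstPurchase = Instant.parse("2014-01-01T00:00:00Z");
        // A fixed clock "simulates" that more than six months have passed.
        Clock later = Clock.fixed(firstPurchase.plus(200, ChronoUnit.DAYS), ZoneOffset.UTC);
        DiscountService service = new DiscountService(later);
        if (!service.isPrivileged(firstPurchase))
            throw new AssertionError("customer should be privileged by now");
        System.out.println("ok");
    }
}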
Once you have the whole test suite ready, if all tests pass after a refactoring or change of the code (just as they did before it), you can be sure that no feature has broken due to the refactoring.
How would you define "unwanted code"?
Edit:
IMHO, any code member with 0 active calling members (checked recursively) is unwanted code. (Functions, methods, properties, and variables are members.)
Here's my definition of unwanted code:
Code that does not execute is dead weight. (Unless it's a [malicious] payload for your actual code, but that's another story :-))
Code that is repeated multiple times increases the cost of the product.
Code that cannot be regression tested increases the cost of the product as well.
You can either remove such code or refactor it, but you don't want to keep it around as it is.
0 active calls and no possibility of use in the near future. And I prefer never to comment anything out in case I need it later, since I use SVN (source control).
Like you said in the other thread, code that is not used anywhere at all is pretty much unwanted. As for how to find it, I'd suggest FindBugs or CheckStyle if you are using Java, for example, since these tools check whether a function is used anywhere and mark it as unused if it isn't. Very nice for getting rid of unnecessary weight.
Well, after thinking about it briefly, I came up with these three points:
it can be code that should be refactored
it can be code that is not called any more (leftovers from earlier versions)
it can be code that does not conform to your style guide and way of coding
I bet there is a lot more, but that's how I'd define unwanted code.
In Java, I'd mark the method or class with @Deprecated.
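For example (the class and method are made up):

// Flag a suspect method instead of deleting it outright.
class LegacyReport {
    @Deprecated
    void computeTotals() {
        // kept only until the remaining callers are migrated
    }
}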
Any PRIVATE code member with no active calling members (checked recursively). Otherwise you cannot know whether your code is used outside the scope of your analysis.
Some things have already been posted, but here's another:
Functions that almost do the same thing (only a small variable differs, so the whole function is copy-pasted and that one variable is changed).
Usually I tell my compiler to be as annoyingly noisy as possible; that picks up 60% of the stuff I need to examine. Unused functions that are months old (after checking with the VCS) usually get ousted, unless their author tells me when they'll actually be used. Stuff missing prototypes is also instantly suspect.
I think trying to implement automated house cleaning is like trying to make a USB device that guarantees that you 'safely' play Russian Roulette.
The hardest part to check is components added to the build system; few people notice those, and unused kludges are left there to gather moss.
Beyond that, I typically WANT the code, I just want its author to refactor it a bit and make their style the same as the rest of the project.
Another helpful tool is Doxygen, which helps you (visually) see relations in the source tree. However, if it's set not to extract static symbols/objects, it's not going to be very thorough.