I'm working on reverse engineering / decompiling an APK file. I was able to use:
http://www.decompileandroid.com/
I'm sure Apktool is a better option (I'd love to hear the reasons why, though), but in this instance it worked - sorta.
My problem, and the root of my question, is that I ended up with over 4,000 Eclipse errors when I import the source (thankfully they seem to be just a small handful of similar/related errors repeated many, many times).
That being said, is there a better method of going about this in order to avoid these errors? (shown below)
Eclipse Errors:
https://docs.google.com/document/d/1gwbZuJ8duQ37JRGeTdqIrv0o_DBNL_xWRxrG9Xxxwy4/edit?usp=sharing
I do not know of any Java decompiler that will reliably produce output that can be "round-tripped" (decompiled, then recompiled). There are a few in active development, my own included, for which you could submit bug reports. In the case of Procyon, type inference has become increasingly broken over time, particularly where generics are concerned. Then there are a host of other problems that primarily affect classes converted from Android format.
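To illustrate the sort of construct that gives decompiler type inference trouble, here is a hypothetical example (not output from any particular tool): a generic method whose type parameters are erased in the bytecode and must be re-inferred during decompilation.
import java.util.Arrays;
import java.util.List;

public class InferenceExample {
    // A generic method with a bounded type parameter. The bytecode erases
    // T, so a decompiler must re-infer it from call sites and casts.
    static <T extends Comparable<T>> T max(List<? extends T> items) {
        T best = items.get(0);
        for (T item : items) {
            if (item.compareTo(best) > 0) {
                best = item;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(max(Arrays.asList(3, 1, 2))); // prints 3
    }
}
A decompiler with broken inference may emit the method as something like "static Comparable max(List items)" with raw types and unchecked casts - output that recompiles with warnings at best, and fails to compile at worst.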
JARs created by tools like dex2jar tend to be much harder to process because they produce tricky exception handler tables, oddly ordered blocks, local variable slot sharing, etc. I would recommend trying a few different combinations of tools: straight Android decompilers as well as different dex-to-jar rewriters paired with various Java decompilers. You may find that one combination of tools consistently yields better results than others.
That said, I will reiterate my usual advice: never trust the output from a decompiler. Do not assume it is correct, even if it compiles cleanly.
TL;DR Version:
This question has arisen due to the fact that I have multiple frameworks (which I have built) and a client project that uses said frameworks. Now, when I open up the client project and try to debug into the framework, it doesn't work.
However, if I have the project associated with the framework open, then debugging appears to work (though there are some weird issues with breakpoints I set not being triggered).
I have looked at Apple's docs, and perhaps the answer is buried there somewhere, but I couldn't find it on a skim of the Xcode Debugging Guide.
Long Version:
The reason this question is important to me is that a coworker and I had a disagreement about how headers are imported in the frameworks we build.
I have a tendency to import framework headers (in client apps) in this fashion:
#import "FrameworkA/HeaderA.h"
#import "FrameworkB/HeaderB.h"
He, on the other hand, favors importing the framework headers (with client apps) like this:
#import "HeaderA.h"
#import "HeaderB.h"
and specifying the header search paths in the build target of the client application.
Complicating matters further is the fact that some of these frameworks have interdependencies. For example, FrameworkB references headers from FrameworkA in his format:
#import "HeaderA.h"
His argument for doing this is that debugging only works if we import headers this way. It seems dubious to me that there would be a relationship between header import style and debugging, but I am not really certain how Xcode chooses the file to link to during debugging, hence the question.
Thanks in advance for any assistance with this query.
Debugging into a framework works when you add project references to the target and make sure Xcode knows where to find the debug symbols.
#import <FrameworkA/HeaderA.h>
That's the way to go (for internal and external declarations). The reason? The other approach is more likely to cause issues as libraries evolve. The additional qualification disambiguates every case (unless, of course, there are two FrameworkA/s in your search path). It's best to qualify the file explicitly now, rather than when your clients tell you they cannot use your library with other libraries, or that they can only use them under certain conditions. Then you have to go fix the issues and reship (this stuff has a way of happening at inconvenient times =p). It's one simple measure to ensure you've developed a robust interface.
Perhaps the most important part that people overlook is the location of the products: use a customized central build location for your targets. Many people use the default location, which is next to the .xcodeproj. Otherwise, Xcode may not be able to locate debug information.
Finally, debugging complex projects in Xcode can be quite... let's call it 'problematic'. So don't expect the debugging experience to be perfect, even if you've configured everything correctly. All the more reason to integrate assertions and unit tests into your development cycle early on with Xcode. The truth is, the debugger may be useless no matter how hard you try; this is not a new issue. Hopefully LLDB will improve our debugging experiences.
Good luck!
I've looked at Microsoft's MSDN and all around the web, but I still haven't been able to get a really good idea of what it is.
Does it mean the completed program loads DLLs at different times during its execution, as opposed to all at once upon launch?
Am I totally way off? :)
Linking involves packaging together all of the .obj files built from your source files, as well as any .lib files you reference, into your output (e.g. an .exe or .dll).
Without incremental linking, this has to be done from scratch each time.
Incremental linking links your exe/dll in a way which makes it easier for the linker to update the existing exe/dll when you make a small change and re-compile.
So, incremental linking just makes it faster to compile and link your project.
The only runtime effect it might have is that it may make your exe/dll slightly bigger and slower, as described here:
http://msdn.microsoft.com/en-us/library/4khtbfyf.aspx
Edit: As mentioned by Logan, incremental linking is also incompatible with link-time code generation, therefore losing a possible performance optimization.
You may want to use incremental linking for debug builds to speed development, but disable it for release builds to improve runtime performance.
Delay-loaded DLLs may be what you are thinking of:
http://msdn.microsoft.com/en-us/library/151kt790.aspx
Also, quite importantly, incremental linking is a prerequisite for Edit & Continue - the ability to edit your code and recompile it on the fly, without restarting.
So it is a good thing to have on debug builds, but not release builds.
I am building up some statistics after analyzing the source code in Eclipse, but the overall process is too slow because I rebuild my model from scratch after each compilation.
I am looking for a way to get only the changed parts of the code (as ASTNodes) and to rebuild just that part of my model. I suppose that even the changed compilation units alone (rather than the exact code elements) would be enough after the user compiles, and would still be a nice optimization.
I am sure Eclipse is capable of knowing which code elements have changed (and even their semantics), because when I use the Subclipse plugin my changes are ordered by code element (an import, a method, a variable declaration, etc.). Well... at least that plugin is capable of knowing that info.
Thanks in advance
The Eclipse builder infrastructure was created for exactly this purpose. To get started, I suggest the following article and FAQ entry.
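As a rough sketch of that infrastructure (the names StatisticsBuilder, rebuildEverything and updateModelFor are placeholders, error handling is trimmed, and registering the builder via the org.eclipse.core.resources.builders extension point is omitted): an incremental builder receives a resource delta, so you can re-parse only the compilation units that actually changed instead of rebuilding your whole model.
import java.util.Map;
import org.eclipse.core.resources.*;
import org.eclipse.core.runtime.CoreException;
import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.jdt.core.ICompilationUnit;
import org.eclipse.jdt.core.JavaCore;
import org.eclipse.jdt.core.dom.AST;
import org.eclipse.jdt.core.dom.ASTParser;
import org.eclipse.jdt.core.dom.CompilationUnit;

public class StatisticsBuilder extends IncrementalProjectBuilder {

    @Override
    protected IProject[] build(int kind, Map<String, String> args,
            IProgressMonitor monitor) throws CoreException {
        IResourceDelta delta = getDelta(getProject());
        if (kind == FULL_BUILD || delta == null) {
            rebuildEverything(); // no delta available: fall back to a full re-analysis
        } else {
            // Visit only resources that changed since the last build.
            delta.accept(d -> {
                IResource res = d.getResource();
                if (res instanceof IFile && "java".equals(res.getFileExtension())) {
                    // (check d.getKind() here if you need to handle removals)
                    updateModelFor(parse((IFile) res, monitor));
                }
                return true; // descend into children
            });
        }
        return null;
    }

    private CompilationUnit parse(IFile file, IProgressMonitor monitor) {
        ICompilationUnit unit = JavaCore.createCompilationUnitFrom(file);
        ASTParser parser = ASTParser.newParser(AST.JLS8);
        parser.setSource(unit);
        parser.setResolveBindings(true); // gives you semantics, not just syntax
        return (CompilationUnit) parser.createAST(monitor);
    }

    private void rebuildEverything() { /* full model rebuild */ }

    private void updateModelFor(CompilationUnit ast) { /* update statistics */ }
}
The article and FAQ linked above cover hooking the builder up to a project.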
I've seen questions about IDEs here -- Which is the best IDE for Scala development? and What is the current state of tooling for Scala? -- but I've had mixed experiences with IDEs. Right now, I'm using the Eclipse IDE with the automatic workspace refresh option, and KDE 4's Kate as my text editor. Here are some of the problems I'd like to solve:
1. Use my own editor. IDEs are really geared toward everyone using their components. I like Kate better, but the refresh system is very annoying (it doesn't use inotify; rather, it polls at maybe a 10s interval). The reason I don't use the built-in text editor is that broken auto-complete functionality causes the IDE to hang for maybe 10s.
2. Rebuild only modified files. The Eclipse build system is broken. It doesn't know when to rebuild classes. I find myself going to Project > Clean almost half of the time. Worse, even after it has finished building my project, a few minutes later it will pop up with some bizarre error (edit: these errors appear to be things that were previously solved with a Project > Clean, but then come back up...). Finally, setting "Preferences / Continue launch if project contains errors" to "prompt" seems to have no effect for Scala projects (i.e. it always launches even if there are errors).
3. Build customization. I can use the "nightly" release, but I'll want to modify and use my own Scala builds, not the compiler that's built into the IDE's plugin. It would also be nice to pass [e.g.] -Xprint:jvm to the compiler (to print out lowered code).
4. Fast compiling. Though Eclipse doesn't always build correctly, it does seem snappy - even more so than fsc.
I looked at Ant and Maven, though I haven't employed either yet (I'll also need to spend time solving #3 and #4). I wanted to see if anyone has other suggestions before I spend time getting a suboptimal build system working. Thanks in advance!
UPDATE - I'm now using Maven, passing a project as a compiler plugin to it. It seems fast enough; I'm not sure what kind of JAR caching Maven does. A current repository for Scala 2.8.0 is available [link]. The archetypes are very cool, and cross-platform support seems very good. However, regarding compile issues, I'm not sure whether fsc is actually fixed or my project is just stable enough (e.g. class names aren't changing); running it manually doesn't bother me as much. If you'd like to see an example, feel free to browse the pom.xml files I'm using [github].
UPDATE 2 - From benchmarks I've seen, Daniel Spiewak is right that Buildr's faster than Maven (and, if one is doing incremental changes, Maven's 10-second latency gets annoying), so if one can craft a compatible build file, then it's probably worth it...
Points 2 and 4 are extremely difficult to manage with the current scalac. The problem is that Scala's compiler is a little dumb about building files. Basically, it will build whatever you feed it, regardless of whether or not that file really needs to be built. Scala 2.8.0 will have some tremendous improvements in this respect, but until then... Eclipse SDT actually has some very elaborate (and very hackish) code for doing change detection and dependency tracking. On the whole, it does a decent job, but as you have seen, there are wrinkles. Eclipse SDT 2.8.0 will rely on the aforementioned improvements to scalac itself.
So, building only modified files is pretty much out of the question. Aside from SDT, the only tool I know of which even tries this is SBT (Simple Build Tool). It uses a compiler plugin to track files as they are compiled and query the dependency graph computed by the compiler itself. In practice, this yields about a 50% improvement over the recompile-the-world approach. Once again, this is a hack to get around deficiencies in pre-2.8.0 scalac.
The good news is that reasonably fast compilation is still achievable even without worrying about change detection. FSC uses the same technology (ooh, that sounded so "Charlie Eppes") that Eclipse SDT uses to implement fast incremental compilation. In short, it's pretty snappy.
Personally, I use Apache Buildr. Its configuration is significantly cleaner than either Maven's or SBT's and its startup time is orders of magnitude less (when running under MRI). It integrates with FSC and attempts to do some basic change detection on its own (fairly primitive). It also has auto-magical support for the major Scala test frameworks (ScalaTest, ScalaCheck and Specs) as well as support for joint compilation with Java sources and IDE meta generation for IntelliJ and Eclipse. Oh, and it supports all of Maven's features (dependency resolution, etc) and then some. I'm even working on an extension which would allow interactive shell support integrated with JavaRebel and supporting several shell providers (Scala, JIRB, Clojure REPL, etc). It's not ready for the SVN yet, but I'll commit once it's ready (possibly in time for 1.3.5).
As you can see, I'm very firmly of the opinion that Buildr is the best Scala build tool out there. Its documentation is a little spotty where Scala is concerned, but that's because everything is so straightforward that it's hard to document without feeling verbose. You can always check out one of my GitHub repositories for examples. Good luck!
Have you looked at IntelliJ IDEA and its Scala integration? IntelliJ has a loyal (fanatical?) following amongst Java developers, so you may find it is appropriate for your needs.
I'm also quite frustrated with the Scala plugin on Eclipse, and I can add a few more problems to the list:
auto-complete only works some of the time
the debugger doesn't work properly (especially when trying to debug Scala XML)
the debugger forgets breakpoints
'go to definition' doesn't work more often than not.
I'm glad to hear that Buildr sounds like a better alternative (on the build front, anyhow); I'll give that a try - thanks!
If you use Emacs, I think Ensime is a pretty good IDE. I believe that, at the time of writing, Ensime is the only IDE that will give you fast and accurate autocompletion on both Scala and Java objects, including implicit conversions.
There's code browsing support using Speedbar, code templates using the excellent Yasnippet, and a code completion menu using Autocomplete. These are all very modern, actively maintained Emacs packages. There's also out-of-the-box incremental build support for Maven and SBT.
There's a lot more in there, such as interactive debugging, refactoring, and the Scala interpreter in an inferior process. All the things you want in a modern IDE for Scala are already there in Ensime. Highly recommended for Emacs users.
For completeness, I have to say that there is also Pants - the build tool in use at Twitter (one of the early Scala adopters).
The main difference is that it is not intended only for Scala (it's written in Python, by the way) and is modeled after Google's build system.
It's not as bloated as SBT, so for newcomers it's much simpler, but I've never heard of Pants being used outside of Twitter and Foursquare.
If you're scared of SBT, maybe another not-so-popular build tool, ABT, could be an alternative for you?
I went down the same road, and here is where I am at:
- After some initial investigation, I dropped Kate. I love using it for most things, but when it came to things like defining tab completions, I found it sorely lacking. I would recommend that you look into gedit instead, which is much more robust for Scala development.
- With gedit as my editor, I use SBT and have found it to be a great build tool. I can put it into a 'test' mode where, whenever any code changes, it recompiles the relevant files and runs my test suite. This has been an extremely effective way to work.
I have not taken a look at Buildr yet. I would like to say that I will, but honestly with SBT at my disposal I don't really have a compelling need to look at another build tool.
If you want to use Eclipse, but build the project using sbt, and still be able to debug, take a look at this post here:
zikaprog.wordpress.com/2010/04/19/scala-eclipse-sbt-and-debugging/
It can also be applied to builders other than sbt.
The latest version of the Maven Scala plugin supports Zinc/Nailgun for faster start times and faster incremental builds. See Zinc and Incremental Compilation.
Almost any IDE creates lots of files that have nothing to do with the application being developed; they are generated and maintained by the IDE so it knows how to build the application, where the version control repository is, and so on.
Should those files be kept under version control along with the files that really have something to do with the application (source code, the application's configuration files, ...)?
The thing is: in some IDEs, if you create a new project and then import it into the version-control repository using the version-control client/commands embedded in the IDE, all those files are sent to the repository. And I'm not sure that's right: what if two different developers working on the same project want to use two different IDEs?
I want to keep this question agnostic, avoiding references to any particular IDE, programming language or version control system. So this question is not exactly the same as these:
SVN and binaries - but this talks about binaries and SVN
Do you keep your build tools in version control? - but this talks about build tools (e.g. putting the jdk under version control)
What project files shouldn’t be checked into SVN - but this talks about SVN and dll's
Do you keep your project files under version control? - very similar (haven't found it before), thanks VonC
Rules of thumb:
Include everything which has an influence on the build result (compiler options, file encodings, ASCII/binary settings, etc.)
Include everything needed to open the project from a clean checkout and be able to compile/run/test/debug/deploy it without any further manual intervention
Don't include files which contain absolute paths
Avoid including personal preferences (tab size, colors, window positions)
Follow the rules in this order.
[Update] There is always the question of what should happen with generated code. As a rule of thumb, I always put it under version control. As always, take this rule with a grain of salt.
My reasons:
Versioning generated code seems like a waste of time. It's generated, right? I can get it back at the push of a button!
Really?
If you had to bite the bullet and generate the exact same version of some previous release without fail, how much effort would it be? When generating code, you not only have to get all the input files right, you also have to turn back time for the code generator itself. Can you do that? Always? As easy as it would be to check out a certain version of the generated code if you had put it under version control?
And even if you could, could you ever be sure that you didn't miss something?
So on one hand, putting generated code under version control makes sense, since it makes it dead easy to do what VCSs are meant for: go back in time.
Also it makes it easy to see the differences. Code generators are buggy, too. If I fix a bug and have 150'000 files generated, it helps a lot when I can compare them to the previous version to see that a) the bug is gone and b) nothing else changed unexpectedly. It's the unexpected part which you should worry about. If you don't, let me know and I'll make sure you never work for my company ever :-)
The major pain point of code generators is stability. It won't do for your code generator to spit out a random mess of bytes every time you run it (well, unless you don't care about quality). Code generators need to be stable and deterministic: run them twice with the same input and the output must be identical down to the least significant bit.
So if you can't check in generated code because every run of the generator creates spurious differences, then your code generator has a bug. Fix it. Sort the code when you have to. Use hash maps that preserve order. Do everything necessary to make the output non-random, just like you do everywhere else in your code.
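A minimal sketch of that determinism point in Java (hypothetical generator code): iterating a plain HashMap yields an unspecified order, so emitting code straight from it produces spurious diffs on every run; sorting the keys first makes the output reproducible.
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public class StableEmitter {
    public static void main(String[] args) {
        Map<String, String> fields = new HashMap<>();
        fields.put("zeta", "String");
        fields.put("alpha", "int");
        fields.put("count", "long");

        // Iterating the HashMap directly would emit the fields in an
        // unspecified, JVM-dependent order. A TreeMap view sorts the keys,
        // so two runs with the same input produce byte-identical output.
        for (Map.Entry<String, String> f : new TreeMap<>(fields).entrySet()) {
            System.out.println("    private " + f.getValue() + " " + f.getKey() + ";");
        }
    }
}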
Generated code that I might not put under version control would be documentation. Documentation is somewhat of a soft target. It doesn't matter as much when I regenerate the wrong version of the docs (say, it has a few typos more or less). But for releases, I might do that anyway so I can see the differences between releases. Might be useful, for example, to make sure the release notes are complete.
I also don't check in JAR files. Since I have full control over the whole build, full confidence that I can get back any version of the sources in a minute, and the knowledge that I have everything necessary to build it without any further manual intervention, what would I need the executables for? Again, it might make sense to put them into a special release repo, but then again, you could just keep a copy of the last three years' releases on your company's web server for download. Think: comparing binaries is hard and doesn't tell you much.
I think it's best to put anything under version control that helps developers to get started quickly, ignoring anything that may be auto-generated by an IDE or build tools (e.g. Maven's eclipse plugin generates .project and .classpath - no need to check these in). Especially avoid files that change often, that contain nothing but user preferences, or that conflict between IDEs (e.g. another IDE that uses .project just like eclipse does).
For eclipse users, I find it especially handy to add code style (.settings/org.eclipse.jdt.core.prefs - auto formatting on save turned on) to get consistently formatted code.
Everything that can be automatically generated from the source + configuration files should not be under version control! It only causes problems and limitations (like the one you stated: two programmers using two different project files).
This is true not only for IDE "junk files" but also for intermediate files (like .pyc in Python, .o in C, etc.).
This is where build automation and build files come in.
For example, you can still build the project (the two developers will need the same build software, obviously), but they can then use two different IDEs.
As for the 'junk' that gets generated, I tend to ignore most of it. I know this is meant to be language-agnostic, but consider Visual Studio: it generates user files (user settings etc.) that should not be under source control.
On the other hand, project files (used by the build process) most certainly should. I should add that if you are on a team and have all agreed on an IDE, then checking in IDE-specific files is fine, provided they are global and not user-specific.
Those other questions do a good job of explaining what should and shouldn't be checked into source control, so I won't repeat them.
In my opinion it depends on the project and environment. In a company environment where everybody is using the same IDE, it can make sense to add the IDE files to the repository, though this depends a bit on the IDE, as some include absolute paths to things.
For a project which is developed in different environments it doesn't make sense, and will be a pain in the long run, as the project files aren't maintained by all developers and make it harder to find the "relevant" things.
Anything that would be devastating if it were lost, should be under version control.
In my opinion, anything needed to build the project (code, make files, media, databases with required program info, etc.) should be in repositories. I realise that, especially for media/database files, this is controversial, but to me if you can't branch and then hit build, the source control's not doing its job. This goes double for distributed systems with cheap branch creation/merging.
Anything else? Store it somewhere different. Developers should choose their own working environment as much as possible.
From what I have been looking at with version control, it seems that most things should go into it - e.g. source code and so on. However, the problem that many VCSs run into is handling large files, typically binaries, and at times things like audio and graphics files. Therefore, my personal approach is to put the source code under version control, along with generally small-sized graphics, and leave any binaries to other systems of management. If it is a binary that I created myself using the build system of the IDE, then it can definitely be ignored, because it is going to be regenerated on every build. For dependency libraries, well, this is where dependency package managers come in.
As for IDE-generated files (I am assuming these are ones that aren't generated during the build process, such as the solution files for Visual Studio) - well, I think it depends on whether or not you are working alone. If you are working alone, then go ahead and add them - they will allow you to revert settings in the solution or whatever you make. The same goes for other non-solution files as well. However, if you are collaborating, then my recommendation is no: most IDE-generated files tend to be, well, user-specific - aka they work on your machine, but not necessarily on others'. Hence, you may be better off not including IDE-generated files in that case.
tl;dr: You should put most things that relate to your program into version control, excluding dependencies (things like libraries, graphics and audio should be handled by some other dependency management system). As for things directly generated by the IDE - well, it depends on whether you are working alone or with other people.