InterSystems Cache ObjectScript vs Java as in Web application development [closed] - intersystems-cache

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
Recently my project manager asked me to work on InterSystems Caché ObjectScript. I previously worked as a Java (J2EE) developer. So my question is: how different is Caché from Java? A comparison would be great to have.

Caché ObjectScript is very different from Java and has very little in common with it. It is more like a dynamically typed, compiled scripting language with a metalanguage built in (class definitions), and with a large number of features you need to know to write good code. All the code is compiled to low-level (but fairly readable) so-called routine code and is executed by the Caché DBMS and its application engine.
Take this reference, for example. As you may notice, there are many unusual symbols and constructs like $, $$, $$$, ##class, &sql(...), &javascript<...>, #dim, $System, .#, $get, $zu(...), %, ^%, { ... }, and so on (the list is long). Some of the language features are quite unpredictable at first glance. For example, $get(...) looks like a function but silently acts like a try/catch statement, as do $data and some other system functions.
So be prepared to work with the InterSystems documentation! The recently launched InterSystems Developer Community is also a great resource. When Googling, you may find quite a few answers on the internet, but keep in mind to search with the "intersystems" or "objectscript" keywords. Many things you won't find there at all, and in that case you should use the InterSystems docs or the community to ask questions. Once you get used to the language (which took me over six months), you will feel more confident in it.
It is also worth mentioning that Caché ObjectScript is literally a "dinosaur" language, which has evolved and been upgraded over time. That's why there are so many different features. Some of them you shouldn't use anymore: for example, instead of writing code in routines, as people did before OOP concepts were introduced, you should use classes. ObjectScript's JSON capabilities (the ability to write JSON directly inside ObjectScript) were introduced only about a year ago. You will find plenty of "prehistoric" code in Caché and should take it in stride: it is a really huge ecosystem.
Hope this helps, happy hacking!

Related

Advice on which Excel addin development system to use? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don't allow questions seeking recommendations for books, tools, software libraries, and more.
Closed 2 years ago.
Around 15 years ago, I was writing .xll addins for Excel using the C-API, mostly for spreadsheet formulae (complex and time-consuming calcs), and written in C++ (very much my preferred language). I subsequently did a few things with C# wrappers.
After a long break, during which Excel and Visual Studio have moved on apace, I am once again looking at building an addin to use compiled code for a lengthy optimization, rather than using a VBA script. The optimization involves taking large arrays of data out of worksheets, crunching them and then writing the results back into the workbook (as I want to chart them etc. later).
NB: This is for my own use; I am not planning on distributing any solution.
In pseudo Excel VBA code what I'd like to have is something like this:
Dim obj as MyOptimizer
Dim rngInputData as Range
obj.SetData(rngInputData)
obj.Optimize
Dim v as vOutputData
v = obj.GetResult
I am a little bewildered by the options available (in no particular order):
code a C-style dll with a VBA xla wrapper
code a C-style dll with an Excel.DNA C# wrapper
code a COM object of some sort (though I am not sure which Visual Studio template to use)
Something else ...
A COM object has its attractions as it will hold state, and I can create an instance of my object without having to write a clunky handle-passing system. The other consideration is the amount of marshalling between different layers (I'll be using a lot of 2-D arrays, though only of doubles), and the type-checking of the inputs.
I'm looking for some guidance at this crossroads, before I go down the rabbit hole of one particular method ... then of course, I'll be back with more questions!
Thanks!
If C++ is still very much your preferred language, you should certainly consider it. It will give you the best performance possible and leverage your knowledge of the C API. If you can afford it, I would also recommend you have a look at the XLL+ library. While expensive, I think if you look at the features you'll find it brings a lot of value.
I develop Excel-DNA and very much prefer to avoid C++ in favour of the .NET platform.
So, while biased, I think it will be an even better fit for your requirements. It is a vastly easier environment to work in compared to C++ (in my opinion: no XLOPERs!). However, there is some overhead in the marshaling, so you'd make some compromise on performance. You don't indicate what range sizes you'll use or how fast you expect the interchange to happen. With Excel-DNA, I expect you can easily read, process and write a block of a million numbers in about a second (see my answer here: https://stackoverflow.com/a/3868370/44264). Similarly, a small block of 10 cells with numbers can be read and written more than 10,000 times per second. Things will get a bit slower if you are working with long strings rather than numbers, but you suggest this is not the case.
While you should expect calculation code (apart from the Excel interop) to be very fast with C# and .NET, you can also call C libraries from .NET very efficiently. So keeping the core calculation libraries in C and using the P/Invoke mechanism to call them from .NET is a completely viable plan, and you still get the (large) benefit of a more pleasant environment for the Excel parts.
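As a rough sketch of the mechanism (the library name "optimizer.dll", its exported function and its signature below are hypothetical, purely to illustrate the P/Invoke pattern):

using System;
using System.Runtime.InteropServices;

public static class NativeOptimizer
{
    // Hypothetical native export: assumes optimizer.dll exposes a C function
    // that reads a flat input array and fills a caller-allocated output array.
    [DllImport("optimizer.dll", CallingConvention = CallingConvention.Cdecl)]
    private static extern int RunOptimization(
        double[] input, int length, double[] output, int outputLength);

    public static double[] Optimize(double[] input)
    {
        var output = new double[input.Length];
        int rc = RunOptimization(input, input.Length, output, output.Length);
        if (rc != 0)
            throw new InvalidOperationException("Native optimizer failed with code " + rc);
        return output;
    }
}

The arrays of doubles are blittable, so this kind of call involves essentially no copying on the managed/native boundary.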
VBA and COM approaches to reading / writing with Excel will not perform quite as well as the Excel-DNA / .NET approach (which also uses the C API for this kind of thing under the hood, rather than COM). Still, when used properly the COM overhead is not terrible, and might not on its own be a showstopper for you.
I am interested in this kind of optimization approach myself, so would be happy to help if you do take the Excel-DNA approach. The best place for Excel-DNA questions is the Excel-DNA Google group.
Getting started with Excel-DNA would look like this:
Download and install the (free) Visual Studio 2019 Community Edition.
Select the ".NET desktop development" workload when installing. You don't need to check the Office options when installing for Excel-DNA add-ins.
Then make a new C# "Class Library (.NET Framework)" project. It's important at this step not to pick ".NET Standard" or ".NET Core" (long story...).
Then in your project install the "ExcelDna.AddIn" package from NuGet.
Read and follow the instructions in the readme that pops up.
Paste in the code snippet from here: https://stackoverflow.com/a/3868370/44264, press F5 and test in Excel.
After the slow Visual Studio install, it will only take a few minutes, and you'll get some idea of what's involved.
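To give a flavour of the result, a first Excel-DNA function might look something like the sketch below (the function name and the doubling "calculation" are placeholders I've made up, not anything from the package):

using ExcelDna.Integration;

public static class OptimizerFunctions
{
    // Exposed to Excel as a worksheet function, e.g. =RunOptimizer(A1:C1000).
    // Excel-DNA marshals the selected range into a double[,]; the returned
    // array is written back to the sheet as an array result.
    [ExcelFunction(Description = "Runs the optimization over a block of numbers")]
    public static double[,] RunOptimizer(double[,] input)
    {
        int rows = input.GetLength(0), cols = input.GetLength(1);
        var result = new double[rows, cols];

        // Placeholder calculation: scale every cell. Replace this with the
        // real optimization, or hand the data to a native library via P/Invoke.
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < cols; j++)
                result[i, j] = input[i, j] * 2.0;

        return result;
    }
}

A plain worksheet function like this is the quickest way to check that the marshaling round trip is fast enough for your array sizes before you commit to a design.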

Code as System image (serialized run-time environment) vs Source (text) [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
Almost all conventional languages today represent the programmer's intention as text source, which is then (let's say, for the sake of simplicity) translated to some bytecode/machine code and interpreted/executed by a VM/CPU.
There was another technique which, for some reason, isn't that popular these days: "freeze" the run-time of your VM and dump/serialize the environment (symbol bindings, state, code, whatever that is) into an image, which you can then transfer, load and execute.
Consequently, you do not "write" your code in the usual way; instead, you modify the environment with new symbols while it is running.
I see great advantages to this technique:
Power-boosted REPL: you can introspect your code as you write it, partially evaluate it, test it directly and see the effects of your changes. Then roll back if you've messed up and do it again, or finally commit it to the environment. No need for a long compile-run-debug cycle;
Some of the usual problems with dynamic languages (that they cannot be compiled, because the compiler cannot reason about environments statically) are obviated: the interpreter knows where everything is located and can substitute symbol references with static offsets and do other optimizations;
It's easier on the programmer's brain: you "offload" contextual information about the code from your head, i.e. you don't need to keep track of what your code has already done to some variable/data structure or which variable holds what: you see it directly in front of your eyes. In the usual way (writing source), programmers add new abstractions or comments to the code to clarify intent, but this can (and will) get messy.
The question is: what are the disadvantages of this approach? Is there any serious, critical disadvantage that I am not seeing? I know there are some problems with it, e.g.:
try building a module system with it that does not result in dependency hell or serious linkage problems
security issues
try to version-control such images and enable concurrent development
But these are, IMHO, solvable with a good design.
EDIT 1: concerning the status "closed, primarily opinion-based". I've described two existing approaches, and it is clear and obvious that one is preferred over the other. Whether the reasons for that are purely "opinion-based" or there is research to back them up is unknown to me, but even if they are opinion-based, if someone listed the reasons that led to that opinion, it would actually answer my question.
As a daily user of Smalltalk, I have to say I haven't found any fundamental disadvantages and have to agree that there are lots of advantages.
It makes metaprogramming and reasoning about your program easy, and it supports refactoring and code rewriting much better.
It requires (and develops) a different way of looking at your code, though. Smalltalk has little to offer to developers who are not interested in abstraction.

If you read an LGPL project's source code and that inspires an entirely different implementation, is that work still a derived work? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 7 years ago.
I really want to use PGM for an application that I'm working on for one of my companies. That application will never be distributed; it's for internal use only. There is an implementation called OpenPGM and (I believe) a derivative work, javapgm, that implements the protocol. Both are licensed under the LGPL.
My question is: if I read the source code for these libraries and use that knowledge to help create an Erlang PGM implementation, would that be considered a derived work? I would prefer to release my implementation under the BSD license, so I'm not trying to take something for nothing, but I want to play fair.
In short then:
Would / should my version be released under the LGPL?
If my company is using it internally only, would there be any restrictions on how it could use that library? (it would never be distributed outside the company).
Is it in the spirit of the LGPL license to do what I want to do?
Thanks in advance! :)
I don't think it would be a derived work unless there is a 1:1 correlation between lines of code in your thing and the open source code. We're not talking about a patent here, where the concept of the invention is important.
If it is only used internally then it doesn't have to be.
You could never be certain that it doesn't accidentally leak out or get shared or included in another project.
You should try to work with OpenPGM to make the Erlang interface that you need; then it is open source, other people may help maintain it for you, and you get a free code review.

What are the Pros and Cons of ICU? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
My team is tasked with implementing Unicode in our software, which is well over a million lines of code. We support an MFC Client and a Server on Windows, AIX or Solaris with an Oracle or SQL Server database. ICU looks like a very helpful tool. What are the pros and cons of using ICU? Does ICU work as advertised without major bugs?
A data point: Our (yes, that's a disclaimer) list of users and bugs is all on our project site.
IMBO (biased):
Pros:
Works as advertised, and comprehensive.
Mature: 10+ years now, with a good stability policy and very active development.
Uses latest Unicode+CLDR+BCP47+other standards.
Compiles basically everywhere. C/C++/Java, and is called by / implements Python, Perl, PHP, …
Open source, with an increasing diversity of contributors.
Comes with all the data needed for the above (see below, under cons), yet customizable (you can add custom data).
Cons:
Needs better documentation (we try; anyone want to help?).
Lots of APIs ("it's too big", #1): hard to know which one to use, even if it does what you want.
Used by lots of types of programs, from embedded devices and smartphones through major desktop apps to databases, operating systems and enterprise apps: so there may be multiple ways to do something.
Comes with all the data needed for the above ("it's too big", #2; see above, under pros), yet customizable (it can be trimmed down to size).
ICU is terrible: avoid if at all possible.
Despite its age, basic things in it are broken, for example in this question: Fixing regex to work around ICU/RegexKitLite bug
Time handling is broken as times are underspecified: you can't distinguish a DST from a non-DST time in a reliable way in many APIs.
It's freaking huge.
The documentation needs a lot of work. Less-used features are often unusable because there's no way to figure out the right way to use them. I spent days trying to get transliteration to work as explained and eventually gave up.
It likes to work in UTF-16, the worst of all possible worlds.
Support is unresponsive to problems.
In my experience, it's not until you're most of the way through a project that you begin to discover the insidious flaws that will take 90% of your time.
For many people, there is no alternative so you're stuck with it.

What is missing in Objective-C that makes you not want to program with it [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 5 years ago.
Now that Apple has relaxed the restrictions on developer tools/programs, I wonder what tempts developers to languages other than the one Apple offers by default, Objective-C, which is quite fun to program with. What missing features make you program in something else instead?
Lack of Objective-C expertise or a large/complex code base in another language would be among common reasons.
Cross-platform coding might well be another.
I haven't done any iPhone development yet, but generally speaking, here's a few reasons:
Cross-platform development
The other language suits your coding style better
The other language is a better tool for the job
You are comfortable in the other language and don't have the time / budget / motivation to learn Objective-C
Existing libraries / codebase
Specific tools you might want to use
Testing concepts in Objective-C can sometimes be tedious to set up. Sometimes you just want to see how a single method works or play around with an object's functionality.
Setting up a new project is somewhat tedious, and it's not always feasible to incorporate the test code into a new project.
In this case, I do one of two things:
Keep an empty project around specifically for testing things
Drop down to the Terminal and use irb (or PyObjC) to play with the objects in Ruby or Python.
In a nutshell, the thing that's missing is the ability to use Objective-C in an interpreted manner. You have to use another language (like Ruby or Python) to do this.
I recently wrote some networking code in Python, then had to translate it into Objective-C for use on the iPad. A typical line of clear Python would become five or ten lines of busy-work C. I just work faster in higher-level languages; the language puts up less resistance and requires fewer forms to be filled out.
I have ported a couple of tiny language interpreters (for my own use, not for App Store distribution) to the iPhone. This allows me to write short snippets of code on the road, without having to carry my Mac, and run them locally. I don't know of any small Objective-C interpreters, and the language is not really designed for interactive use.