Is there any way to compile only one particular package on the server? - oracle10g

The Oracle server is not smart enough to catch updated packages, so whenever I recompile my package it throws an "existing state of package has been invalidated" error.
Is there any way to refresh only my package on the server, so that I don't need to bounce or stop the server that everyone else is using?

Minimize public functions, procedures, and variables. Invalid objects and "ORA-04068: existing state of packages has been discarded" errors will decrease once you minimize public objects and package state.
Any function or procedure in a package specification is public. Changing those functions and procedures may change the way other objects use the package, causing invalidations. Public APIs should also be thoroughly documented and tested, which means you want to have as few of them as possible. For some reason most Oracle programs unnecessarily put all their procedures and functions in the specification, when most of them are only needed in the body. Moving them into the body will make your programs better, shrink dependencies, and minimize invalid objects.
Any variable in the package specification is also public, and will keep its value for the duration of the session. As in any language, public variables should be minimized. For some reason most Oracle programs also unnecessarily put many variables in the specification instead of the body. If there are no variables in the specification, there is no package state and you won't see ORA-04068.
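To make that concrete, here is a minimal sketch (the package, procedure, and helper names are invented for illustration): only the procedure callers actually need goes in the specification, the helper stays private in the body, and with no package-level variables there is no state to discard.

CREATE OR REPLACE PACKAGE pkg_orders AS
    PROCEDURE process_order(p_order_id NUMBER);
END pkg_orders;
/

CREATE OR REPLACE PACKAGE BODY pkg_orders AS
    -- Private helper: changing it only needs ALTER PACKAGE pkg_orders COMPILE BODY
    -- and does not invalidate callers of the specification.
    FUNCTION is_valid(p_order_id NUMBER) RETURN BOOLEAN IS
    BEGIN
        RETURN p_order_id IS NOT NULL;
    END is_valid;

    PROCEDURE process_order(p_order_id NUMBER) IS
    BEGIN
        IF is_valid(p_order_id) THEN
            NULL;  -- real work would go here
        END IF;
    END process_order;
END pkg_orders;
/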
Do not develop on shared systems. Give every developer an infinite number of databases and merge changes in version-controlled text files. There are many easy and cheap ways to get an infinite number of databases - databases installed locally, virtual machines, containers, etc. There are also many easy and cheap ways to version control text files - every modern IDE can open and save files to the filesystem, then use something like Git or SVN.
Having multiple developers work on a single shared system simply does not scale. Other than "it was slightly easier to set it up that way", there is literally no good reason to develop that way anymore.
Compile body, not specification. As Littlefoot suggested, try to change the body instead of the specification. (Although dependencies were less granular in 10g, this may not help as much as it would in modern versions.)

If you modified the package (let's call it PKG_TEST), both specification and body, it is compiled with
alter package pkg_test compile;
If the specification was changed, it might have caused other dependent objects to become invalid (that's probably what you saw).
However, if you modified only the package body, you don't have to compile the specification (as it didn't change), only the body:
alter package pkg_test compile body;
Whatever changes you made to the body won't invalidate other objects. So pick one of those commands, depending on what you changed in that package.

Related

How to make "prereqs" of CPAN::Meta::Spec require a distribution instead of a package?

I'm researching about how to package some of my Perl apps and better manage their dependencies to make distribution easier for me and my customers, which most likely doesn't include uploading to CPAN at all. Instead, I would provide custom repos if necessary or, more likely, access to SCMs like Subversion.
CPAN::Meta::Spec seems to provide what I need to describe my apps, their dependencies and even where to get them from, but what I'm wondering is about the level of detail of pre-requisites. The spec contains the following sentence:
The set of relations must be specified as a Map of package names to version ranges.
Requiring packages seems a little too low level for my needs, I would prefer requiring distributions instead. Pretty much the level (from my understanding) tools like Maven and Gradle work at, e.g. Apache Commons Lang vs. Apache Commons IO etc. instead of individual classes like org.apache.commons.lang3.AnnotationUtils or org.apache.commons.io.ByteOrderMark. OTOH, the example in the docs contains the following lines:
requires => {
    'perl'       => '5.006',
    'File::Spec' => '0.86',
    'JSON'       => '2.16',
},
The line containing perl doesn't look like a package to me, and I didn't find any package perl or perl.pm anywhere on my system. It seems to me like that entry is handled differently from the other things in the example.
I have a system-wide folder containing e.g. some utility packages, which seems comparable to some abstract perl to me. That folder should be defined as one distribution, maintain a version number for all of the packages in it, and therefore allow other apps to require that whole thing. If I understand the docs correctly, I would need to create not only the META.yml in the folder, but additionally some e.g. sysutils.pm containing package sysutils; and defining some version.
Is there some way to avoid creating that file and really require the distribution itself only?
The META.yml already contains a name and version on its own, so it looks like some abstract thing one could require in theory. I don't see the need to add an extra .pm file representing the distribution itself only to allow require to work. It wouldn't contain any business logic in my case.
Thanks!
That's really not what you want to do. You want to pre-req what you actually require. So, for example, if you need File::Spec, that's what you need, regardless of whether it comes from perl core or from a separate CPAN distribution.
I've seen cases where certain modules have moved from CPAN to core, or vice versa. By requiring the module directly, you don't need to ship new releases of your dependent distributions simply because someone you depend on changed their method of distribution.
I've also seen cases where certain modules are split off from their original distributions when it was determined they were valuable as standalone modules. Depending on the module means that you no longer drag in a bunch of other modules for a simple dependency.
What you're more or less looking for is akin to the Task::* modules. No real logic in most of them, just a list of further dependencies.
The Perl dependency system works entirely on package names, on multiple levels. When a CPAN distribution is uploaded, each package within is indexed by PAUSE, which also checks if the uploader has permissions for that package and that the package has a newer version than the currently indexed package. None of these checks care about the distribution as a whole (though the indexer does do other checks at that level).
Then, when a CPAN client sees a dependency, or you tell it to install something, it checks the index for that package name, which tells it what distribution release to install. If it depends on a certain version, that is checked against the $VERSION declared in that package if you have it installed; whereas once a distribution is installed, its "version" is no longer tracked. The distribution level is almost entirely meaningless except that it is what is ultimately downloaded and installed to satisfy these dependencies. This is important, because modules can and do move between distributions, maintaining their version increments, and the package index will always tell you which distribution to get the version you need.
As you noticed, the perl dependency is weird. It's a special case that has been there forever: as a convention, to declare what version of Perl you require, you declare a runtime requirement on perl. It is not an indexed module, and every CPAN client and other consumer of CPAN metadata special-cases it to either ignore it or treat it as a minimum Perl version, rather than something that can be installed. There's no way to extend this to work for distributions in general, and it would be a bad idea to try.
As an additional note, the CPAN meta spec is a specification for the file named META.json included in CPAN distributions (META.yml is the legacy version), but this file is automatically generated by your authoring tool. It should never be created manually, though you may have your authoring tool add certain keys (in which case reading the spec is important), including prereqs. See neilb's blog post for how to specify dependencies for the various authoring tools, which will then transpose them into the generated META file, and also for how to use a cpanfile to specify dependencies in general.
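For illustration only, a minimal cpanfile (module names and versions here are just illustrative) declares prerequisites by package name, and the authoring tool carries them over into the generated META file:

# cpanfile -- module names and versions are illustrative
requires 'perl', '5.006';
requires 'File::Spec', '0.86';
requires 'JSON', '2.16';

on 'test' => sub {
    requires 'Test::More', '0.96';
};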

Idiomatic Rust plugin system

I want to outsource some code for a plugin system. Inside my project, I have a trait called Provider which is the code for my plugin system. If you activate the feature "consumer" you can use plugins; if you don't, you are an author of plugins.
I want authors of plugins to get their code into my program by compiling it to a shared library. Is a shared library a good design decision? The plugins are limited to being written in Rust anyway.
Does the plugin host have to go the C way for loading the shared library: loading an unmangled function?
I just want authors to use the trait Provider for implementing their plugins and that's it.
After taking a look at sharedlib and libloading, it seems impossible to load plugins in an idiomatic Rust way.
I'd just like to load trait objects into my ProviderLoader:
// lib.rs
pub struct Sample { ... }

pub trait Provider {
    fn get_sample(&self) -> Sample;
}

pub struct ProviderLoader {
    plugins: Vec<Box<Provider>>
}
When the program is shipped, the file tree would look like:
.
├── fancy_program.exe
└── providers
    ├── fp_awesomedude.dll
    └── fp_niceplugin.dll
Is that possible if plugins are compiled to shared libs? This would also affect the decision of the plugins' crate-type.
Do you have other ideas? Maybe I'm on the wrong path so that shared libs aren't the holy grail.
I first posted this on the Rust forum. A friend advised me to give it a try on Stack Overflow.
UPDATE 3/27/2018:
After using plugins this way for some time, I have to caution that in my experience things do get out of sync, and it can be very frustrating to debug (strange segfaults, weird OS errors). Even in cases where my team independently verified the dependencies were in sync, passing non-primitive structs between the dynamic library binaries tended to fail on OS X for some reason. I'd like to revisit this, find what cases it happens in, and perhaps open an issue with Rust, but I'm going to advise caution with this going forward.
LLDB and valgrind are near-essential to debug these issues.
Intro
I've been investigating things along these lines myself, and I've found there's little official documentation for this, so I decided to play around!
First let me note, as there is little official word on these properties please do not rely on any code here if you're trying to keep planes in the air or nuclear missiles from errantly launching, at least not without doing far more comprehensive testing than I've done. I'm not responsible if the code here deletes your OS and emails an erroneous tearful confession of committing the Zodiac killings to your local police; we're on the fringes of Rust here and things could change from one release or toolchain to another.
I have personally tested this on Rust 1.20 stable in both debug and release configurations on Windows 10 (stable-x86_64-pc-windows-msvc) and CentOS 7 (stable-x86_64-unknown-linux-gnu).
Approach
The approach I took was a shared common crate, listed as a dependency by both the host and the plugin, that defines the common struct and trait definitions. At first, I was also going to test having a struct with the same structure, or a trait with the same definitions, defined independently in both libraries, but I opted against it because it's too fragile and you wouldn't want to do it in a real design. That said, if anybody wants to test this, feel free to do a PR on the repository above and I will update this answer.
In addition, the Rust plugin was declared dylib. I'm not sure how compiling as cdylib would interact, since I think it would mean that upon loading the plugin there are two versions of the Rust standard library hanging around (since I believe cdylib statically links the Rust stdlib into the shared object).
Tests
General Notes
The structs I tested were not declared #[repr(C)]. This could provide an extra layer of safety by guaranteeing a layout, but I was most curious about writing "pure" Rust plugins with as little "treating Rust like C" fiddling as possible. We already know you can use Rust via FFI by wrapping things in opaque pointers, manually dropping, and such, so it's not very enlightening to test this.
The function signature I used was pub fn foo(args) -> output with the #[no_mangle] attribute. It turns out that rustfmt automatically changes extern "Rust" fn to simply fn. I'm not sure I agree with this in this case, since they are most certainly "extern" functions here, but I will choose to abide by rustfmt.
Remember that even though this is Rust, this has elements of unsafety, because libloading (or the unstable DynamicLib functionality) will not type-check the symbols for you. At first I thought my Vec test was proving you couldn't pass Vecs between host and plugin, until I realized that on one end I had Vec<i32> and on the other I had Vec<usize>.
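For reference, loading a symbol on the host side looks roughly like this hedged sketch (the plugin path and symbol name are invented, and the exact libloading API differs a little between versions); note that the type parameter on Symbol is taken entirely on faith, which is exactly the kind of mismatch described above:

use libloading::{Library, Symbol};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    unsafe {
        // Nothing verifies that "add_one" really has this signature in the plugin.
        let lib = Library::new("providers/fp_niceplugin.dll")?;
        let add_one: Symbol<fn(&mut i32)> = lib.get(b"add_one")?;
        let mut x = 41;
        add_one(&mut x);
        assert_eq!(x, 42);
    }
    Ok(())
}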
Interestingly, there were a few times I pointed an optimized test build at an unoptimized plugin and vice versa and it still worked. However, I still can't in good faith recommend building plugins and host applications with different toolchains, and even if you do, I can't promise that rustc/LLVM won't decide to apply certain optimizations to one version of a struct and not another. In addition, I'm not sure whether passing types through FFI prevents certain optimizations, such as null pointer optimization, from occurring.
You're still limited to calling bare functions, no Foo::bar because of the lack of name mangling. In addition, due to the fact that functions with trait bounds are monomorphized, generic functions and structs are also out. The compiler can't know you're going to call foo<i32> so no foo<i32> is going to be generated. Any functions over the plugin boundary must take only concrete types and return only concrete types.
Similarly, you have to be careful with lifetimes for similar reasons: since there's no static lifetime checking, Rust is forced to believe you when you say a function returns &'a when it's really &'b.
Native Rust
The first tests I performed were on no custom structures; just pure, native Rust types. This would give a baseline for if this is even possible. I chose three baseline types: &mut i32, &mut Vec, and Option<i32> -> Option<i32>. These were all chosen for very specific reasons: the &mut i32 because it tests a reference, the &mut Vec because it tests growing the heap from memory allocated in the host application, and the Option as a dual purpose of testing passing by move and matching a simple enum.
All three work as expected. Mutating the reference mutates the value, pushing to a Vec works properly, and the Option works properly whether Some or None.
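For concreteness, the plugin-side functions for those three baselines look roughly like this sketch (the function names are invented):

#[no_mangle]
pub fn increment(x: &mut i32) {
    *x += 1;                    // mutate through a reference from the host
}

#[no_mangle]
pub fn push_one(v: &mut Vec<i32>) {
    v.push(1);                  // grow heap memory allocated by the host
}

#[no_mangle]
pub fn double_opt(o: Option<i32>) -> Option<i32> {
    match o {                   // pass by move and match a simple enum
        Some(n) => Some(n * 2),
        None => None,
    }
}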
Shared Struct Definition
This was meant to test whether you could pass a non-builtin struct with a common definition on both sides between plugin and host. This works as expected, but as mentioned in the "General Notes" section, I can't promise Rust won't optimize (or fail to optimize) a structure definition on one side and not the other. Always test your specific use case and use CI in case it changes.
Boxed Trait Object
This test uses a struct that is only defined on the plugin side, but that implements a trait defined in a common crate, and returns a Box<Trait>. This works as expected. Calling trait_obj.fun() works properly.
At first I actually anticipated there would be issues with dropping without making the trait explicitly have Drop as a bound, but it turns out Drop is properly called as well (this was verified by setting the value of a variable declared on the test stack via raw pointer from the struct's drop function). (Naturally I'm aware drop is always called even with trait objects in Rust, but I wasn't sure if dynamic libraries would complicate it).
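A rough sketch of that shape (the type and method names here are invented, and the return type is reduced to a plain i32 for brevity): the struct stays private to the plugin, and only the trait from the common crate crosses the boundary.

// in the common crate, shared by host and plugin
pub trait Provider {
    fn get_sample(&self) -> i32;
}

// in the plugin crate, compiled as a dylib
struct MyProvider;

impl Provider for MyProvider {
    fn get_sample(&self) -> i32 {
        42
    }
}

#[no_mangle]
pub fn new_provider() -> Box<dyn Provider> {
    Box::new(MyProvider)        // the host only ever sees Box<dyn Provider>
}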
NOTE:
I did not test what would happen if you load a plugin, create a trait object, then drop the plugin (which would likely close it). I can only assume this is potentially catastrophic. I recommend keeping the plugin open as long as the trait object persists.
Remarks
Plugins work exactly as you'd expect just linking a crate naturally, albeit with some restrictions and pitfalls. As long as you test, I think this is a very natural way to go. It makes symbol loading more bearable, for instance, if you only need to load a new function and then receive a trait object implementing an interface. It also avoids nasty C memory leaks because you couldn't or forgot to load a drop/free function. That said, be careful, and always test!
There is no official plugin system, and you cannot do plugins loaded at runtime in pure Rust. I saw some discussions about doing a native plugin system, but nothing is decided for now, and maybe there will never be any such thing. You can use one of these solutions:
You can extend your code with native dynamic libraries using FFI. To use the C ABI, you have to use repr(C), the no_mangle attribute, extern, and so on. You will find more information by searching for Rust FFI on the internets. With this solution, you must use raw pointers: they come with no safety guarantee (i.e. you must use unsafe code).
Of course, you can write your dynamic library in Rust, but to load it and call the functions, you must go through the C ABI. This means that the safety guarantees of Rust do not apply there. Furthermore, you cannot use Rust's higher-level features, such as traits, enums, etc., across the boundary between the library and the binary.
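As a minimal sketch of that C-ABI surface (the function name is invented), everything crossing the boundary is an unmangled extern "C" function over C-compatible types, which the host then loads by name via dlopen/LoadLibrary or a wrapper such as libloading:

// plugin side, built as a cdylib
#[no_mangle]
pub extern "C" fn plugin_add(a: i32, b: i32) -> i32 {
    a + b
}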
If you do not want this complexity, you can use a language designed to extend Rust, with which you can dynamically add functions to your code and execute them with the same guarantees as in Rust. This is, in my opinion, the easier way to go: if you have the choice, and if execution speed is not critical, use this to avoid tricky C/Rust interfaces.
Here is a (not exhaustive) list of languages that can easily extend Rust:
Gluon, a functional language like Haskell
Dyon, a small but powerful scripting language intended for video games
Lua with rlua or hlua
You can also use Python or JavaScript, or see the list in awesome-rust.

TypeScript : Can't reference a static field in a certain context [duplicate]

I'm in the process of moving a fairly large TypeScript project from internal modules to external modules. I'm doing this because I want to create one core bundle which, if and when required, can load other bundles. A second requirement I'm keeping in mind is that I'd like to be able to run the bundles (with some modifications if required) on the server with Node.js as well.
I first tried using AMD and require.js to build the core bundle, but I came across an issue with circular dependencies. After reading that this is common with require.js and that CommonJS is more often advised for large projects, I tried that instead. But now that it's set up together with browserify, I have the exact same issue coming up at the exact same place when I run the compiled bundle.
I have something like 10 base classes that heavily rely on each other and form multiple circular dependencies. I don't see any way to remove all of them.
A simplified setup to explain why I don't think I can remove the circular dependencies:
Triples are made of 3 Resources (subject,predicate,object)
Resource has TripleCollections to keep track of which triples it's used in
Resource has multiple functions which rely on properties of Triple
Triple has some functions which handle TripleCollections
TripleCollection has multiple functions which uses functions of Triple
TripleCollection.getSubjects() returns a ResourceCollection
ResourceCollection.getTriples() returns a TripleCollection
Resource keeps track of the objects of its triples in ResourceCollections
ResourceCollection uses multiple functions of Resource
I've read multiple related issues here on SO (this one being most helpful), and from what I can gather I only have a few options:
1) Put all the base classes with circular dependencies in 1 file.
This means it will become one hell of a file. And imports will always need aliases:
import core = require('core');
import BaseA = core.BaseA;
2) Use internal modules
The core bundle worked fine (with its circular dependencies) when I used internal TypeScript modules and reference files. However, if I want to create separate bundles and load them at run time, I'm going to have to use shims for all modules with require.js.
Though I don't really like all the aliasing, I think I will try option 1 now, because if it works I can keep going with CommonJS and browserify, and later on I can more easily run everything on the server in Node as well. I'll resort to option 2 if option 1 doesn't work out.
Q1: Is there some possible solution I haven't mentioned?
Q2: What is the best setup for a TypeScript project with one core bundle which loads other bundles (built upon the core) on demand, which seemingly cannot avoid circular dependencies, and which preferably can run both on the client and the server?
Or am I just asking for the impossible? :)
Simply put (perhaps simplistically, but I don't think so), if you have ModuleA and ModuleB and both rely on each other, they aren't modules. They are in separate files, but they are not acting like modules.
In your situation, the classes are highly interdependent; you can't use any one of those classes without needing them all. So unless you can refactor the program to make the dependencies one-way rather than two-way (for example), I would treat the group of classes as a single module.
If you do put all of these in a single module, you may still be able to break it into modules by moving some of the dependencies, for example (and all of these questions are largely rhetorical as they are the kind of question you would need to ask yourself), what do Triple and Resource share on each other? Can that be moved into a class that they could both depend on? Why does a Triple need to know about a TripleCollection (probably because this represents some hierarchical data)? There may only be some things you can move, but any dependency removed from this current design will reduce complexity. For example, if the two-way relationship between Triple and Resource could be removed.
It may be that you can't substantially change this design right now, in which case you can put it all in one module and that will largely solve the module loading issue and it will also keep code together that is likely to change for the same reason.
The summary of all of this is:
If you have a two way module dependency, make it a one-way module dependency. Do this by moving code to make the dependency one way or by moving it all into a single larger module that more honestly represents the coupling between the modules.
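As a hedged sketch of the one-way direction (the file and interface names here are invented), the shared shape moves into its own module that both classes import, so neither has to import the other:

// types.ts -- the shared contract both sides depend on
export interface ITriple {
    subject: string;
    predicate: string;
    object: string;
}

// triple.ts -- depends only on types.ts
import { ITriple } from "./types";

export class Triple implements ITriple {
    constructor(
        public subject: string,
        public predicate: string,
        public object: string,
    ) {}
}

// tripleCollection.ts -- also depends only on types.ts, not on triple.ts
import { ITriple } from "./types";

export class TripleCollection {
    private triples: ITriple[] = [];

    add(triple: ITriple): void {
        this.triples.push(triple);
    }
}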
So my view on your options is slightly different to those in your questions...
1) Try refactoring the code to reduce coupling to see if you can keep smaller modules
Failing that...
2) Put all the base classes with circular dependencies in 1 file.
I wouldn't place the internal modules option on the list - I think external modules are a much better way of managing a large program.

PostgreSQL's plperlu interpreter's @INC and/or cached libraries: separate for different databases?

I have different versions of some libraries I'm developing, and want to, from within various plperl functions I've written, load a certain version based on current_database().
(IIRC using use rather than require is preferred, I think because it might cache the library?)
However, my fear is that different databases on the same server will have problems, in either way that I'm thinking of doing it:
1) use lib and then use--if more than one path ends up on @INC, the one that gets used may not be the right one
2) require--even if this means the right version is always used in the current script, does it mean that the library gets reloaded every time? And either way, if libraries are kept loaded once used, is it possible that namespace pollution from different versions could result in bugs? (E.g. if I have something branch based on whether a variable is defined, and in one version it is by default and in another it's not--will all the versions now act as if it is, unless I explicitly undefine it rather than just not defining it?)
If plperl is not loaded through shared_preload_libraries, each database session will have its own interpreter freshly initialized at first use, so the libraries included by one session cannot possibly interfere with another session.
See PL/Perl Under the Hood in the manual for more.
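For context, the per-database selection the question describes might look something like this hedged sketch inside a plperlu function (the paths and module name are invented); since each session gets its own interpreter, the @INC manipulation stays local to that session:

# inside a plperlu function body
my $db = spi_exec_query('SELECT current_database() AS db')->{rows}[0]{db};
unshift @INC, "/opt/perl-libs/$db";    # invented per-database path layout
require My::Library;                   # loaded from that path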

Why would you want to export symbols in Perl?

It seems strange to me that Perl would allow a package to export symbols into another package's namespace. The exporting package doesn't know if the using package already defined a symbol by the same name, and it certainly can't guarantee that it's the only package exporting a symbol by that name.
A very common problem caused by this is using CGI and LWP::Simple at the same time. Both packages export head() and cause an error. I know, it's easy enough to work around, but that's not the point. You shouldn't have to employ work arounds to use two practically-core Perl libraries.
As far as I can see, the only reason to do this is laziness. You save some keystrokes by not typing Foo:: or using an object interface, but is it really worth it?
The practice of exporting all the functions from a module by default is not the recommended one for Perl. You should only export functions if you have a good reason. The recommended practice is to use EXPORT_OK so that the user has to type the name of the required function, like:
use My::Module 'my_function';
Modules from way back when, like LWP::Simple and CGI, were written before this recommendation came in, and it's hard now to alter them to not export things since it would break existing software. I guess the recommendation came about through people noticing problems like that.
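To make the EXPORT_OK recommendation concrete, here is a minimal sketch of the module side (reusing the hypothetical My::Module and my_function names from the example above):

package My::Module;

use strict;
use warnings;
use Exporter 'import';

# Nothing is exported by default; callers must ask for my_function by name.
our @EXPORT_OK = ('my_function');

sub my_function {
    return 42;
}

1;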
Anyway, Perl's object-oriented interface (or whatever it's called) doesn't require you to export anything, and you don't have to type Foo:: either, so that part of your question is wrong.
Exporting is a feature. Like every other feature in any language, it can cause problems if you (ab)use it too frequently, or where you shouldn't. It's good when used wisely and bad otherwise, just like any other feature.
Back in the day when there weren't many modules around, it didn't seem like such a bad thing to export things by default. With 15,000 packages on CPAN, however, there are bound to be conflicts and that's unfortunate. However, fixing the modules now might break existing code. Whenever you make a poor interface choice and release it to the public, you're committed to it even if you don't like it.
So, it sucks, but that's the way it is, and there are ways around it.
The exporting package doesn't know if the using package already defined a symbol by the same name, and it certainly can't guarantee that it's the only package exporting a symbol by that name.
If you wanted to, I imagine your import() routine could check, but the default Exporter.pm routine doesn't (and probably shouldn't: it's going to get used a lot, always checking whether a name is defined would cause a major slowdown, and if it found a conflict, what would Exporter be expected to do?).
As far as I can see, the only reason to do this is laziness. You save some key strokes by not typing Foo:: or using an object interface, but is it really worth it?
Foo:: isn't so bad, but consider My::Company::Private::Special::Namespace:: - your code will look much cleaner if we just export a few things. Not everything can (or should) be in a top-level namespace.
The exporting mechanism should be used when it makes code cleaner. When namespaces collide, it shouldn't be used, because it obviously isn't making code cleaner, but otherwise I'm a fan of exporting things when requested.
It's not just laziness, and it's not just old modules. Take Moose, "the post-modern object system", and Rose::DB::Object, the object interface to a popular ORM. Both import the meta method into the use-ing package's symbol table in order to provide features in that module.
It's not really any different than the problem of multiply inheriting from modules that each provide a method of the same name, except that the order of your parentage would decide which version of that method would get called (or you could define your own overridden version that somehow manually folded the features of both parents together).
Personally I'd love to be able to combine Rose::DB::Object with Moose, but it's not that big a deal to work around: one can make a Moose-derived class that “has a” Rose::DB::Object-derived object within it, rather than one that “is a” (i.e., inherits from) Rose::DB::Object.
One of the beautiful things about Perl's "open" packages is that if you aren't crazy about the way a module author designed something, you can change it.
package LWPS;
require LWP::Simple;

for my $sub (@LWP::Simple::EXPORT, @LWP::Simple::EXPORT_OK) {
    no strict 'refs';
    *$sub = sub { shift; goto &{'LWP::Simple::' . $sub} };
}

package main;

my $page = LWPS->get('http://...');
Of course, in this case, LWP::Simple::get() would probably be better.