I just started using Puppet, and I don't know how to execute classes.
I have my files "config.pp init.pp install.pp service.pp".
For example, install.pp:
class sshd::install{ ... }
Next, I declare my class in init.pp with "include sshd::install".
I also tried to run the classes with:
class{'sshd::install':} -> class{'sshd::config':} ~> class{'sshd::service':}
After that, I launch "puppet apply init.pp", but nothing happens.
My scripts work individually, but with classes I don't know how to execute them all.
Thanks
I'm not sure how much research you've done into Puppet and how its code is structured, but these may help:
Module Fundamentals
Digital Ocean's guide.
It appears that you are starting out with a basic module structure (based on your use of init/install/service), which is good. However, your execution approach is that of a direct manifest (not the module itself), which won't work for the module you are testing: autoloading only works when your files are inside a valid module path.
Basically: you want to put your class/module-structured code within Puppet's module path (see puppet config print modulepath), and then use another manifest file (.pp) to include your class.
An example file structure:
/etc/puppetlabs/code/modules/sshd/manifests/init.pp
/etc/puppetlabs/code/modules/sshd/manifests/install.pp
/etc/puppetlabs/code/modules/sshd/manifests/service.pp
/tmp/my_manifest.pp
Your class sshd(){ ... } code goes in init.pp, class sshd::install(){ ... } goes in install.pp, and so on.
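For instance, init.pp could declare and order the subclasses using contain and chaining arrows on class references - a minimal sketch, assuming a reasonably modern Puppet and the sshd::install/sshd::config/sshd::service class names from your question:

# init.pp - a minimal sketch; class names are taken from the question
class sshd {
  contain sshd::install
  contain sshd::config
  contain sshd::service

  # install first, then config; changes to config notify the service
  Class['sshd::install']
  -> Class['sshd::config']
  ~> Class['sshd::service']
}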
Then the 'my_manifest.pp' would look something like this:
include ::sshd
And you would apply with: puppet apply /tmp/my_manifest.pp.
Once this works, you can learn about the various approaches to applying manifests to your nodes (directly like this, using an ENC, using a site.pp, etc.; feel free to do further reading).
Alternatively, as long as the module is within your modulepath (as mentioned above), you could simply do puppet apply -e 'include ::sshd'.
In order to get the code that you have to operate the way you are expecting it to, it would need to look like this:
# Note: This is BAD code, do not reproduce/use
class sshd() {
  class{'sshd::install':} ->
  class{'sshd::config':} ~>
  class{'sshd::service':}
}
include sshd
or something similar, which entirely breaks how the module structure works. (In fact, that code will not work unless the module is in the correct path, and it will display some VERY odd behavior if executed directly. Do not write code like that.)
I've done some searching on this, but I cannot find info. I'm building an application inside sinatra, and using the coffeescript templating engine. By default the compiled code is wrapped as such:
(function() {
// code
}).call(this);
I'd like to remove that using the --bare flag, so that different files can access the classes and so forth that I'm defining. I realize the wrapper helps guard against variable conflicts, but I'm working with two main pieces here: one is the business logic and the arrangement of data in class structures; the other is the view functionality, using raphaeljs. I would prefer to keep these two pieces in separate files, and since files wrapped like that cannot access each other's data, it obviously won't work. If you can think of a better solution than the --bare option, I'm all ears.
Bare compilation is simply a bad practice. Each file should export to the global scope only the public objects that matter to the rest of your app.
# foo.coffee
class Foo
  constructor: (@abc) ->
  privateVar = 123

window.Foo = Foo # export
Foo is now globally available. If that pattern isn't practical, maybe you should rethink your structure a bit. If you have to export too many things, nest and namespace things better, so that more data can be exposed through fewer global variables.
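For instance (a sketch - the App namespace name here is invented), a single global object can expose several classes:

# namespace.coffee - a sketch; the App name is invented
window.App ?= {}

class Foo
  constructor: (@abc) ->

class Bar
  constructor: (@xyz) ->

# one global (App), many exports
window.App.Foo = Foo
window.App.Bar = Bar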
I support Alex's answer, but if you absolutely must do this, I believe my answer to the same question for Rails 3.1 is applicable here as well: Put the line
Tilt::CoffeeScriptTemplate.default_bare = true
somewhere in your application.
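In a classic-style Sinatra app, that might look like this (a sketch; the file layout, route, and template names are assumptions):

# app.rb - a sketch; route and template names are invented
require 'sinatra'
require 'tilt/coffee' # make sure Tilt::CoffeeScriptTemplate is loaded

# compile all CoffeeScript templates without the closure wrapper
Tilt::CoffeeScriptTemplate.default_bare = true

get '/application.js' do
  content_type 'application/javascript'
  coffee :application # renders views/application.coffee, now bare
end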
I am used to using Ctrl+Click in Eclipse to follow variables/objects and look at their definitions in order to understand the code.
I just started my first job, and I only have access to Unix (vi or gvim). Is it possible to do what I'm looking for?
Edit: what do I mean by "is it possible"? Let's say class foo is defined in file foo.hpp and is instantiated in foo.cpp. I want to be able to reach the definition of class foo from any instantiation of it in foo.cpp.
With Vim you can use tags files generated by exuberant-ctags and other compatible programs.
"Tags" are function and variables, their name, signature and kind are stored alongside their location in files that Vim is able to parse to allow you to navigate through your code.
:help tags will tell you all you need to know.
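A minimal workflow sketch, assuming Exuberant Ctags is installed and you run it from the project root:

# generate a tags file for the whole project
ctags -R .

# then, inside Vim (started from the same directory):
#   Ctrl-]     jump to the definition of the identifier under the cursor
#   Ctrl-t     jump back to where you came from
#   :tag foo   jump to the definition of 'foo'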
How can I perform a "shallow" syntax check on Perl files? The standard perl -c is useful, but it also checks the imports. This is sometimes nice, but it's not great when you work in a code repository and push to a running environment, and you have a function defined in the repository but not yet pushed to the running environment. The check fails because the imports reference the system paths (i.e. use Custom::Project::Lib qw(foo bar baz)) rather than your working copy.
It can't practically be done, because imports have the ability to influence the parsing of the code that follows. For example use strict makes it so that barewords aren't parsed as strings (and changes the rules for how variable names can be used), use constant causes constant subs to be defined, and use Try::Tiny changes the parse of expressions involving try, catch, or finally (by giving them & prototypes). More generally, any module that exports anything into the caller's namespace can influence parsing because the perl parser resolves ambiguity in different ways when a name refers to an existing subroutine than when it doesn't.
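To illustrate the parsing point with a small self-contained example (not from the question's code; run it without 'use strict', which would turn the first bareword into a compile-time error):

# the same token 'Foo' parses differently before and after the sub is seen
my $x = Foo;     # no sub Foo known yet: parsed as the bareword string "Foo"

sub Foo { 42 }

my $y = Foo;     # sub Foo is now known: parsed as a call, so 42
print "$x $y\n"; # prints "Foo 42"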
There are two problems with this:

1. How do you keep -c from failing when the required modules are missing? There are two solutions:

A. Add a fake/stub module in production.

B. In all your modules, use a special catch-all @INC subroutine entry (using subs in @INC is explained here). This obviously has the problem of NOT failing at real production runtime if the libraries are missing - DoublePlusNotGood in my book.

2. Even if you could somehow skip failing on missing modules, you would STILL fail on any use of the identifiers imported from the missing module or used explicitly from that module's namespace.

The only realistic solution to this is to go back to #1A and use a fake stub module, but this time one that has a declared and (as needed) exported identifier for every public interface, e.g. do-nothing subs or dummy variables - a sketch follows below.

However, even that will fail for some advanced modules that dynamically determine what to create in their own namespace and what to export at runtime (and the caller code could dynamically determine which subs to call - heck, sometimes even which modules to import).

But this approach would work just fine for normal "Java/C-like" OO or procedural code that only calls statically named, predefined public subs and methods and accesses exported variables.
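A minimal sketch of such a stub for the module named in the question (the stubs/ directory name is an invented example):

# stubs/Custom/Project/Lib.pm - a do-nothing stand-in, used only for syntax checks
package Custom::Project::Lib;
use strict;
use warnings;
use Exporter qw(import);

# one dummy entry per public interface
our @EXPORT_OK = qw(foo bar baz);
sub foo { }
sub bar { }
sub baz { }

1;

You would then check with perl -Istubs -c yourscript.pl, so the stub is found before the real (missing) module.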
I would suggest that it's better to include your code repository in your syntax check. perl -I/path/to/working/code/repo/local_perl/ -c or set PERL5LIB=/path/to/working/code/repo/local_perl/ prior to running perl -c. Either option should allow you to check against your working code, assuming you have it in a directory structure similar to your live code.
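For example (the script name here is invented):

# check syntax against your working copy of the modules
perl -I/path/to/working/code/repo/local_perl/ -c myscript.pl

# or, equivalently, via the environment
export PERL5LIB=/path/to/working/code/repo/local_perl/
perl -c myscript.pl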
I guess you could make stubs for the missing libraries in your home folder.
Have you looked into PPI? I don't believe it follows imports, and it could perhaps be modified to guess what looks like a function name.
I am totally new to Perl/FastCGI.
I have some .pm modules to which I will have to add a lot of scripts, and over time it will grow and grow. Hence, I need a structure that makes administration easier.
So, I want to create files in some kind of directory structure which I can include. I want the files that I include to behave exactly as if their text were written in the file where I do the include.
I have tried 'do', 'use' and 'require'. The actual file I want to include is in one of the directories Perl is looking in (verified using perl -V).
I have tried within and outside BEGIN {}.
How do I do this? Is it possible at all to include .pm files in other .pm files? Do they have to be .pm files, or can the included files have any extension?
I have tried several ways; included below is my last try.
Config.pm
package Kernel::Config;

sub Load {
    # other activities
    require 'severalnines.pm';
    # other activities
}

1;
severalnines.pm
# Filter Queues
$Self->{TicketAcl}->{'ACL-hide-queues'} = {
    Properties  => {},
    PossibleNot => {
        Ticket => {
            Queue => ['[RegExp]^*'],
        },
    },
};

1;
I'm not getting any errors related to this in Apache's error_log. Still, the code is not recognized the way it would be if I put it directly in Config.pm.
I am not about to start programming a lot, just doing some admin in a 3rd-party application. Still, I have searched around trying to learn how including files works. Is severalnines.pm considered a Perl module, and do I need to use a program like h2xs, or similar, in order to "create" the module (told you, totally newbie...)?
Thanks in advance!
I usually create my own module prefix -- named after the project or the place I worked. For example, you might put everything under Mu with modules named like Mu::Foo and Mu::Bar. Use multiple modules (don't try to keep everything in one single file) and name your modules with the *.pm suffix.
Then, if the Mu directory is in the same directory as your programs, you only need to do this:
use Mu::Foo;
use Mu::Bar;
If they're in another directory, you can do this:
use lib qw(/path/to/other/directory);
use Mu::Foo;
use Mu::Bar;
Is it possible at all to include .pm files in other .pm files?
Why certainly yes.
So, I want to create files in some kind of directory structure which I can include. I want the files that I include to behave exactly as if their text were written in the file where I do the include.
That's a bad, bad idea. You are better off using the package mechanism. That is, declare each of your modules as a separate package name. Otherwise, your module will have a variable or function in it that your script overrides, and you'll never, ever know it.
In Perl, you can reference variables in your modules by prefixing them with the module name, as File::Find does: for example, $File::Find::name is the found file's name. This doesn't pollute your namespace.
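A tiny illustration of such a fully qualified reference (the starting directory '.' is just an example):

use strict;
use warnings;
use File::Find;

# $File::Find::name is referenced by its fully qualified name;
# the variable itself is never imported into our namespace
find(sub { print "$File::Find::name\n" }, '.');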
If you really want your module's functions and variables in your namespace, look at the @EXPORT_OK list variable in Exporter. This is a list of all the variables and functions that your module offers for import into the caller's namespace. However, it's not automatic; you have to list them next to your use statement. That way, you're more likely to know about them. Using Exporter isn't too difficult. In your module, you'd put:
package Mu::Foo;
use Exporter qw(import);
our @EXPORT_OK = qw(convert $fundge @ribitz);
Then, in your program, you'd put:
use Mu::Foo qw(convert $fundge @ribitz);
Now you can access convert, $fundge, and @ribitz as if they were part of your main program. However, you have now documented that you're pulling in these subroutines and variables from Mu::Foo.
(If you think this is complex, be glad I didn't tell you that you really should use Object Oriented methods in your Modules. That's really the best way to do it.)
if ( 'I want the files that I include to behave exactly as if their text were written in the file where I do the include.'
  && 'have to add a lot of scripts and over time it will grow and grow' ) {
    warn 'This is probably a bad idea because you are not creating any kind of abstraction!';
}
Take a look at Exporter; it will probably give you a good solution!
Summary:
I have a project using GNU Autotools. I have a .pot file, and I need to update it. Is there a magical "make" task that runs xgettext for me (I'm lazy)?
Verbose version:
Hi
I am trying to set up a project using GNU Autotools and gettext.
I'm trying to follow the 'lazy' path (that is, only writing configure.ac, Makefile.am, and such, and letting the tools generate the rest for me as much as possible).
I used gettextize once on my package, so I got a package.pot file, and I derived a fr.po file from it (I'm trying to translate to French).
I never managed to get my code translated, but I figured out it might be because the code was not in the proper place. The translated string is in a lib instead of a main, and the documentation is quite unclear about what I must do in this case. If my main calls a function in a lib, and the function from the lib is using _(), should I use gettext or dgettext in this case? My lib is just there for organisation purposes, so I'm okay with using the same domain (only one package.pot file for the whole app).
So, to try something simpler, I moved my string to the main (it's really just a hello world for the moment). So I need to update the package.pot file, at least so that it picks up the string's new position, don't I? In this case, would I use xgettext manually (painfully passing it the list of all interesting .cpp files, which will be a pain in the ass when I have more than one file), or is there a 'make whatever' task somewhere that I can run?
This may look stupid, but I've not been able to find it.
Also, any help in finding out why my code is not translated (anything not in http://www.gnu.org/software/gettext/FAQ.html#integrating_noop) is welcome!
Thanks
PH
OK, it turns out that:
there is an update-po task in the generated Makefile of the po/ folder that does just what I want;
this task looks at the files referenced in the POTFILES.in file, which I had forgotten to update.
So it was something stupid.
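For anyone else who lands here, the resulting workflow is short (the source file name below is just an example):

# 1. list any new or moved source file in po/POTFILES.in, e.g.:
#      src/main.cpp
# 2. then regenerate the .pot and merge the changes:
cd po
make update-po   # re-runs xgettext and merges changes into fr.po and friends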