How to assess the quality of CPAN Perl modules? [closed]

CPAN hosts a huge number of Perl modules.
What is your preferred way to assess the quality of a Perl module, before committing to a manual test?
It seems like a rather easy question, but it has real impact: many modules are buggy and waste time. It is not my intention to name any of them, to avoid unnecessary defamation.

MetaCPAN Metrics: Rating, Bugs, Last Updated
To get easy access to these metrics, try searching on metacpan.org instead of cpan.org. It displays the number of open bug reports and the average rating on the left-hand side, tells you when the module was last updated, and gives you an overview of how quickly changes are made to the module.
You can also easily see a list of dependencies on the right, so you can look for any less-than-stellar modules in the foundations. It doesn't give you any data you can't get on cpan.org, but it does put it all in one place.
Obviously, if you're working on critical infrastructure, nothing will replace an old-fashioned code review, since you need to be confident not only in the quality of your program but also in your understanding of the plumbing. Still, those are the metrics I usually look at first.
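If you want these numbers in a script rather than a browser, MetaCPAN also exposes a public JSON API. Below is a minimal sketch using only core modules; the endpoint shape and the field names (version, date, license) are assumptions based on the fastapi.metacpan.org service, so verify them against its documentation before relying on them:

    #!/usr/bin/env perl
    # Sketch: fetch basic release metadata from the MetaCPAN API.
    # Endpoint and field names are assumed; HTTPS needs IO::Socket::SSL.
    use strict;
    use warnings;
    use HTTP::Tiny;
    use JSON::PP qw(decode_json);

    my $dist = shift // 'Moose';    # distribution name, e.g. 'Moose'
    my $url  = "https://fastapi.metacpan.org/v1/release/$dist";

    my $res = HTTP::Tiny->new->get($url);
    die "Request failed: $res->{status} $res->{reason}\n" unless $res->{success};

    my $info = decode_json($res->{content});
    printf "Version:      %s\n", $info->{version} // 'unknown';
    printf "Last release: %s\n", $info->{date}    // 'unknown';
    printf "License:      %s\n", join ', ', @{ $info->{license} // [] };

A quick look at the release date and the gaps between releases tells you a lot about whether a distribution is still being cared for.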

Related

Is PyVista a complete library already? [January 2021] [closed]

I was searching for new visualization libraries in Python, and I learned about the existence of PyVista. The examples I've seen look amazing.
I used to work with Mayavi a few years ago, but PyVista seems very tempting. Are the features in PyVista really as well-rounded as those of traditional libraries like Mayavi?
The question can seem opinion-based, but sometimes new libraries need a few years to become complete. I learned this the hard way with some CAD packages.
I intend to use Mayavi to post-process results from CFD simulations (computational fluid dynamics). Please let me know your opinions.
Thanks in advance,
As PyVista is just a wrapper around VTK, it's safe to say that you are not restricted in what is possible.
I found the move from Mayavi to PyVista really straightforward. The library has a couple of functions that allow you to convert your existing data structures into VTK-friendly structures quite easily.
And if you were a fan of Mayavi's pipeline workflow, there is something similar in PyVista where you can add different elements to the plotter.
I hope that helps. Also, join the Slack channel if you want to ask any questions or see what other people are using it for.

Which one is best to consume RESTful web services for Xamarin.Forms? [closed]

I want to start a Xamarin.Forms project, and I am a bit confused about how to consume a REST API for it. Performance matters.
There are many options available, but can anybody please suggest which would be best for Xamarin.Forms (.NET Standard)?
The Microsoft HTTP libraries, or third-party libraries like Refit, RestSharp, PortableRest, etc.?
All of these options are viable. I think the performance differences between these libraries will be marginal, so it mostly comes down to what you feel comfortable with.
I like to use Refit because it takes a lot of redundant code out of your hands and lets you focus on the contract. All the code for the actual calls is generated at compile time (and thus won't impact your performance at runtime).
Also have a look at how well the library is maintained and whether it's still active. If you choose one that has been inactive for a while, chances are that you will end up relying on older software versions, which might not be what you want.

Understanding Perl use structures [closed]

I am trying to understand some messy Perl where a lot of modules use each other. Everything ends up very intertwined, and in some cases global variables are used without importing whatever defines them (for example, A uses B, B uses C, and C defines a variable X which A uses). It is very difficult to refactor the code this way. Are there any methods that will help me understand the structure of the code and what uses what? For what it's worth, we already use strict.
Sorry, but there is no Perl module or other generally available script that can do what you're looking for. I had a similar problem once and had to write my own scripts (possibly using Devel::NYTProf, for example) to parse each module and then analyse the collected data externally.
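As a starting point, here is a minimal sketch of that kind of home-grown scanner, assuming your modules live under lib/. It is purely textual, so it will miss dynamic loading and will happily match use statements inside POD or strings:

    #!/usr/bin/env perl
    # Sketch: build a rough "who uses whom" map by scanning .pm files
    # for use/require statements. Core modules only.
    use strict;
    use warnings;
    use File::Find qw(find);

    my $root = shift // 'lib';
    my %edges;    # file => [ modules it pulls in ]

    find(sub {
        return unless /\.pm\z/;
        my $file = $File::Find::name;
        open my $fh, '<', $_ or die "Can't read $file: $!";
        while (my $line = <$fh>) {
            # Capture "use Foo::Bar" and "require Foo::Bar";
            # the initial capital skips pragmas like strict and warnings.
            if ($line =~ /^\s*(?:use|require)\s+([A-Z]\w*(?:::\w+)*)/) {
                push @{ $edges{$file} }, $1;
            }
        }
    }, $root);

    for my $file (sort keys %edges) {
        print "$file\n";
        print "    uses $_\n" for @{ $edges{$file} };
    }

Feeding those edges into something like Graphviz makes circular use chains much easier to spot than reading the modules one by one.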
Other than that, you're left with commenting out all the use statements and bringing them back in one at a time, only after having done the same with any modules they bring in. A tough job, but you'll learn a lot about your code.
When you start to make changes, expect unexpected failures: when you have circular use statements like this, compilation depends on the order in which code is introduced, which means compilation errors can appear in totally unrelated modules.
Good luck.

What Perl web framework to use for old CGI-based Perl code? [closed]

Yes, while I'm working on Node.js, I still love Perl. :)
The old web product is based on old Perl CGI, and I'm looking for the simplest way to fix XSS, SQL injection, and other web security holes, within a week including testing. :(
So, among
Catalyst
Dancer
Mason
Maypole
Mojolicious
which one should I use on the ARM platform?
Thank you!
You have fallen foul of the primarily opinion-based off-topic categorisation, and your question will probably be closed very soon. However, I think it's worth offering a few guidelines here.
First of all, you should absorb what is written in CGI::Alternatives, as it is a reasonable summary of the subject.
Next you should separate the HTML generation functionality of your existing CGI code from the interface itself, and consider replacements for each of them separately. If you were to use HTML::Tiny together with CGI::Simple, then your code would have to change very little, and you would achieve a better partitioning of functionality.
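A rough sketch of what that pairing looks like follows. The parameter name is invented for illustration, and the explicit escaping via HTML::Entities is there because of the XSS concern in your question:

    #!/usr/bin/env perl
    # Sketch: CGI::Simple for input, HTML::Tiny for output,
    # HTML::Entities to escape anything that came from the user.
    use strict;
    use warnings;
    use CGI::Simple;
    use HTML::Tiny;
    use HTML::Entities qw(encode_entities);

    my $q = CGI::Simple->new;
    my $h = HTML::Tiny->new;

    # Never interpolate raw request parameters into markup.
    my $name = encode_entities($q->param('name') // 'world');

    print $q->header('text/html');
    print $h->html([
        $h->head($h->title('Hello')),
        $h->body($h->p("Hello, $name!")),
    ]);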
Ideally you will then move on to one of the many templating systems, such as Template Toolkit, together with one of the frameworks, which is the topic of your question. In the end you will need to do a lot of research and many trials to discover how well each framework fits your requirements, in terms of both the feature list and the convenience and clarity of the API.
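For a flavour of the templating step, here is a minimal Template Toolkit sketch. The inline template is just for brevity (in practice it would live in its own file), and the built-in html filter handles the escaping:

    #!/usr/bin/env perl
    # Sketch: separating markup from logic with Template Toolkit.
    use strict;
    use warnings;
    use Template;

    my $tt = Template->new;

    # The "| html" filter escapes output, which matters for XSS.
    my $template = '<p>Hello, [% name | html %]!</p>';

    $tt->process(\$template, { name => '<script>alert(1)</script>' })
        or die $tt->error;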
All I can do here is say that I am very fond of the Mojolicious suite and suggest that it may be a good starting point. The API focuses on method chaining in a way similar to Ruby, and there is a Mojolicious::Plugin::CGI accessory that will allow you to execute CGI scripts unchanged during your migration.
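To show how little code a minimal Mojolicious::Lite application needs (the route and template here are invented for illustration):

    #!/usr/bin/env perl
    # Sketch: a minimal Mojolicious::Lite app. Values rendered with
    # <%= %> are HTML-escaped by default, which helps against XSS.
    use Mojolicious::Lite;

    get '/hello/:name' => sub {
        my $c = shift;
        $c->render(template => 'hello', name => $c->param('name'));
    };

    app->start;

    __DATA__

    @@ hello.html.ep
    <p>Hello, <%= $name %>!</p>

Run it with morbo script.pl during development and it serves on localhost:3000.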
Note, however, that all of the frameworks you mention, as well as several others, have their proponents. That is why you must make the selection yourself: such recommendations are influenced primarily by familiarity and are made without your own knowledge of your project's requirements.
Unfortunately I cannot speak to the security issues of the various options, but I hope this has helped a little.

What are the benefits of using a tool like Chef vs. using a makefile/shell script for deployment? [closed]

I have heard good things about Chef and was curious about all of its benefits before I devote time to learning a new tool. I'm not looking to turn this into an opinion thread; I'm looking for a list of the additional features it has over a makefile/shell script.
Chef, along with Ansible, Puppet, and Salt (collectively called CAPS), is based on the model of "describe the desired state of the system and the tool will make it happen".
A script or Makefile is generally a procedural system: run this, then run that, and so on. That means you need to keep a mental model of the system from each step to the next, and if that model ever deviates from the real system (for example, a directory whose owner you are trying to set doesn't exist), your script usually breaks.
With some things this is easy. yum install and apt-get install are internally idempotent: you can run them every time, and if the package is already installed, they simply do nothing.
CAPS systems take that principle (idempotence) and apply it to all management tasks. This has for the most part resulted in less brittle configuration management, because you only need to tell the tool what the end result should look like, and it will take care of figuring out the delta from the current state. A minimal sketch of the idea follows.
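To make that concrete, here is a hand-rolled sketch of the idea in Perl (the path and mode are invented for illustration). The code describes a desired state and acts only on the difference, so running it any number of times converges to the same result:

    #!/usr/bin/env perl
    # Sketch: the idempotence principle behind CAPS tools, by hand.
    use strict;
    use warnings;
    use File::Path qw(make_path);

    sub ensure_directory {
        my (%want) = @_;    # desired state: path and mode

        # Only create the directory if it is missing.
        if (!-d $want{path}) {
            make_path($want{path});
            print "created $want{path}\n";
        }

        # Only touch permissions if they differ from the desired mode.
        my $mode = (stat $want{path})[2] & 07777;
        if ($mode != $want{mode}) {
            chmod $want{mode}, $want{path} or die "chmod failed: $!";
            printf "fixed mode on %s (%04o -> %04o)\n",
                $want{path}, $mode, $want{mode};
        }
    }

    # Safe to run repeatedly; a no-op once the state matches.
    ensure_directory(path => '/tmp/app-data', mode => 0755);

A real CAPS tool generalises this pattern to packages, services, users, and files, and adds reporting of exactly which resources it had to change.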