Module 'socket" not found - sockets

The error occurs when the program is run.
I created a program in ZeroBrane Studio, and I compiled it using srlua.
However, since it makes some socket GET requests, it seems to be looking for files such as socket.dll in the same folder as the executable.
I am aware that there are other questions just like this.
LuaSocket should be somewhere in my Lua folder, and I did find core.dll under the socket directory, but that didn't work.
I'm wondering whether I'm approaching this right, and I'm looking for a way to make sure the program can find these files (it seems more than one file is required).

You need to have socket.lua in a folder that is available through package.path (or it has to be packaged together with your script by srlua), and you also need socket\core.dll available through package.cpath, because socket.lua does require("socket.core") and expects to find a DLL that implements that module.
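If the DLL and the Lua files are shipped next to the srlua-built executable, one option is to extend the search paths before the first require. A minimal sketch, assuming arg[0] holds the executable's path and LuaSocket is laid out as socket.lua plus socket\core.dll:

-- prepend the exe's directory to the search paths so require can find LuaSocket
local exedir = arg[0]:match("^(.*[\\/])") or ".\\"
package.path  = exedir .. "?.lua;" .. package.path   -- finds socket.lua (and socket\http.lua etc.)
package.cpath = exedir .. "?.dll;" .. package.cpath  -- "socket.core" maps to socket\core.dll

local socket = require("socket")  -- loads socket.lua, which in turn requires "socket.core"

This only helps the program locate the files; the Lua files and the DLL still have to ship alongside the executable.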

Force a module (DLL) to be loaded at a specific address

An executable is loaded and run in WinDbg
It loads modules it needs at certain addresses
Breakpoints set/traces retrieved in this session depend on these addresses
When another session is started for the same executable (either because the code execution path changes the DLL dependency order, or because of some nondeterministic loader behavior?), the modules are now loaded at different addresses.
It would be helpful if there were a way to instruct WinDbg/the loader to load the not-yet-loaded modules at given addresses. This would make certain scripts/text comparisons much easier.
Yes, I do realize that, for example, setting breakpoints relative to symbol names should be preferred over fixed addresses, but being able to "reproduce" a reference debugging environment definitely has certain advantages.
Assuming we're dealing with 3rd party DLLs (that I cannot recompile with predefined loading addresses), is there a way to do this?
I was so happy to see that the .reload command has an address parameter, which looked like it would do exactly what I'm asking. However, even though that command would load the modules, when the program is continued (and the actual DLL load is needed), it goes ahead and still loads another copy(?) of the same module, and gives a warning like:
WARNING: moduleX_1be0000 overlaps moduleX
So it didn't really work like I expected, thus this question!
WinDbg does not load modules (DLLs). The modules are loaded by the executable.
The ld and .reload commands of WinDbg do not load modules, they load symbol information (PDB files).
The process of changing the address of a module is called rebasing. It happens if the base address is not available any more, e.g. in use by a heap already. In that case, you cannot prevent rebasing at all.
One thing that might help is disabling ASLR (address space layout randomization). You can change that setting per DLL or EXE; it is the IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE flag in the DllCharacteristics field of the PE header.
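One way to flip that flag without recompiling is the editbin tool that ships with Visual Studio (a sketch with a made-up file name; run it from a Developer Command Prompt):

editbin /dynamicbase:no ThirdParty.dll

Afterwards, dumpbin /headers ThirdParty.dll should no longer list "Dynamic base" among the DLL characteristics.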
On Windows 7, there were ways to disable ASLR completely, but it's not recommended to change that setting on a per-system basis just to help you debug a single process.
Another option would be to use rebase.exe of the Windows SDK and change the base address to a virtual address that you think is more likely to be free at the time the DLL is loaded. I never did that myself, but the rebase help says:
If you want to rebase to a fixed address (ala QFE)
use the ##files.txt format where files.txt contains
address/size combos in addition to the filename
So it sounds possible to define your own base address.

How to transfer files with Tramp using scp or rsync

I've read the TRAMP manual and dozens of forums across the web but I couldn't find an answer to this question. I am trying to set up a link in org-mode that transfers a file from a remote server to my local machine (or vice-versa).
According to the manual I have to write something like
/scp:user@host:filepathonremotemachine
and that's it. No specification of where the file should be moved to, which is weird.
I've tried to do it this way and it simply opened the file (as if I was using ssh); tried other combinations also, without any luck.
There is a specific reason why I am trying to do this with Tramp and not a shell:command link. Any help is very welcome.
UPDATE
Apparently TRAMP is less useful than what it promises. That leaves me with the shell:command link option. The problem then becomes avoiding the openssh window that pops up. The closest solution I found was here, and it boils down to setting up an ssh-agent. I am not very familiar with this procedure and I would prefer to use the authinfo.gpg authentication method. Do I have this option? Thanks.
Tramp itself offers just alternative implementations of native Emacs functions. In this sense it is dumb, like every library, because it doesn't know what the caller wants.
I'm not an org-mode specialist, but could you please show which kind of link you have in mind? Without any remoteness, just a link which copies a file locally. Replacing local file names with remote ones will be easy then.
I assume you need something like an external link evaluating Lisp code, like
elisp:(copy-file "/path/src" "/path/target")
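With a Tramp file name as the source, the same kind of link might look like this (user, host and paths are made up):

[[elisp:(copy-file "/scp:user@host:/path/on/remote/report.txt" "~/Downloads/report.txt" t)][fetch report.txt]]

The third argument just allows overwriting an existing local copy.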
The following works (for some definition of "works"):
* link to copy a file
[[shell:scp remote.host.com:/path/to/file /tmp][scp]]
But you must have arranged passwordless login to the remote host beforehand (e.g. ssh-copy-id your public key to the remote host). Given that, there is no output in the org buffer and no openssh popup, just the standard question from org-mode asking whether you really want to execute the shell command, and the file is copied quietly to its destination.
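The passwordless setup is usually a one-off step (host name made up):

ssh-copy-id user@remote.host.com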

How do I convert a PowerShell script with custom modules into a single executable?

I've written a simple script that has multiple custom functions stored as modules. I have done it this way because I was always told that if a function can be reused by other things it should be a module and not a dot-sourced include. I'm starting to think that mantra isn't right in my current scenario. I am trying to convert the script into a single .exe so that I can install it as a Windows service.
I should probably acknowledge that I understand why you wouldn't want to include system modules like Active Directory or IIS management, for the obvious issues that could lead to, but I'm only trying to include custom functions in a single, distributable, non-editable way.
I have used PowerGUI in the past, but I can't find any valid exes for it since Dell removed it, and from memory I don't think I've ever used it with a module.
I've tried PS2EXE-GUI and PS2EXE. Both of these build the exe, and everything works fine while the modules exist. However, as soon as I put the exe on a server that doesn't have the modules deployed to it, it fails to run. I thought the compilation followed all the dependencies and included them in the single exe as part of the build? That appears not to be the case.
I've also tried PowerShell Studio 2018 by Sapien, but based on their forums you can't include modules in the compiled exe. Which again feels wrong if they are really just custom functions, but it's the way they've written it.
I see https://poshtools.com/docs/posh-pro-tools/merge-script/ would possibly do what I need, but that's chargeable, and it looks like it actually merges all the content back into a single file (roughly like the sketch below). Given the time pressure I'm starting to think I'll have to pay if there really are no better options. I just don't have time to join everything together manually, and I can't help thinking there is a better way I'm missing!
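What I mean by merging is roughly something like this, before handing the result to PS2EXE (module names and paths are made up for illustration):

# hypothetical paths: the custom modules and the main service script
$modules = 'C:\Scripts\Modules\MyHelpers.psm1', 'C:\Scripts\Modules\MyLogging.psm1'
$main    = 'C:\Scripts\MyService.ps1'
$merged  = 'C:\Scripts\build\MyService.merged.ps1'

# put the function definitions first, then the main script, so everything is defined before it is called
Set-Content -Path $merged -Value ($modules | ForEach-Object { "# --- merged from $_ ---`n" + (Get-Content -Raw $_) })
Add-Content -Path $merged -Value (Get-Content -Raw $main)

# then feed $merged to PS2EXE instead of the original script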
Can anybody please suggest other options?
Could I also get clarification around my original mantra (functions go in modules...)?
"No, never!" or "Yes, always!" or "It's just wrong in this scenario."

Is there a way to get the system configuration files folder within a Perl script?

Tried searching for this a number of ways and have not yet found an answer ...
Background
I am working on a legacy Perl application that has a lot of hard-coded values in it which should be configurable depending on where the app is installed. So, obviously, I am looking to externalize these values into a configuration file that may be located in one of a few "expected" locations. That is, using a traditional approach of checking for the configuration file in:
the current working directory,
the user's home directory (or a sub-folder therein), and
the system configuration directory (or a sub-folder therein)
where the first one found wins.
Where I am at
Perused the CPAN site a bit and found the Config::Any package, which looks promising. I can give it a list of files to use:
use Config::Any;
my $config = Config::Any->load_files(
    {
        files   => [qw(sample.conf /home/william/.config/sample.conf /etc/sample.conf)],
        use_ext => 0,
    });
This will check for the existence of each of these files, and, if found, load the contents into an array reference of hash references. Not bad, but I still have to hard-code the locations where I search for my sample.conf file. Here, I assume that I am working on a Linux system, and that the location for the configuration file for all users of the application is /etc/. I could always add /usr/local/etc/ as well, but regardless, this is not system agnostic.
I can locate the user home folder using File::HomeDir for searching there, and it works correctly regardless of the system on which the application is running. So is there a similar package that would provide the /etc/ folder (or its equivalent on other platforms)?
Questions
Is there a way to do this without having to know what particular OS I am on? (Perl package or code snippet)
What is the "Perl best practice" way of accomplishing this? I cannot imagine that no one else has run into this previously.
As long as you only plan to run your code on Unix-based hosts, the conventional directory layout and the Filesystem Hierarchy Standard let you rely on a fairly large set of well-known places.
Anyway, nothing prevents you from dynamically building the file search specification to take account of platform oddities and the platform-specific ways of finding them (e.g. File::HomeDir::Win32 vs. File::HomeDir).
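A rough sketch of that idea (the Windows fallback via %PROGRAMDATA% is only an assumption, adjust as needed):

use strict;
use warnings;
use Config::Any;
use File::HomeDir;
use File::Spec;

# build the candidate list dynamically instead of hard-coding /etc
my $sysconf_dir = $^O eq 'MSWin32'
    ? ($ENV{PROGRAMDATA} // 'C:\\ProgramData')   # assumed Windows counterpart of /etc
    : '/etc';

my @candidates = (
    'sample.conf',                                                          # current working directory
    File::Spec->catfile(File::HomeDir->my_home, '.config', 'sample.conf'),  # user's home
    File::Spec->catfile($sysconf_dir, 'sample.conf'),                       # system-wide
);

my $config = Config::Any->load_files({ files => \@candidates, use_ext => 0 });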

Packaging a GWT app to run completely offline NOT installed via a "marketplace"

There are a few questions similar to this, so I'll try to be as clear as possible.
We have an existing, fairly large and complex, GWT webgame I have been asked to make work offline. It has to be offline in pretty much the strictest sense.
Imagine we have been told to make it work off a CD-ROM.
So installation is allowed, but we can't expect the users to go to a Chrome/Firefox store and install it from there. It would need to come off the disc.
Likewise, altering the browser's start-up flags would be unreasonable to expect of users.
Ideally, it would be nice if they just clicked an HTML file for the start page and it opened in their browser of choice.
We successfully got it working this way in Firefox by adding:
"<add-linker name='xsiframe' />"
To our gwt.xml settings. This seems to solve any security issues FF has with local file access.
However, this does not solve the problem for Chrome.
The main game starts up, but various file requests are blocked due to security issues like this:
XMLHttpRequest cannot load file:///E:/Game%20projects/[Thorn]%20Game/ThornGame/text/messages_en.properties. Cross origin requests are only supported for protocol schemes: http, data, chrome, chrome-extension, https, chrome-extension-resource.MyApplication-0.js:34053 com_google_gwt_http_client_RequestBuilder_$doSend__Lcom_google_gwt_http_client_RequestBuilder_2Ljava_lang_String_2Lcom_google_gwt_http_client_RequestCallback_2Lcom_google_gwt_http_client_Request_2 MyApplication-0.js:34053
Now, I was aware that same-origin policy issues might pop up, as during development we often tested locally using flags in Chrome to bypass them.
The thing is, now I don't know how to get around them when we can't use start-up flags.
Obviously, in the example given it's just the .properties file GWT uses to get some language-related text. I could dump that inline in one way or another.
However, it's only one of many, many files being blocked.
The whole game was made to run off *.txt game scripts on the server, to allow easy updating by non-coders. Really the actual GWT code is just an "engine" and all the XMLHttpRequested files supply the actual "game".
These files are of various types: csv, txt, ntlist, jam.
The last two are custom extensions for what are really just txt files.
All these files are blocked by Chrome's security. From what I can make out, it seems only images are allowed to be accessed locally.
Having all these files compiled in would be impossible, as they are not fixed in number (i.e. one central .txt file determines various scene .txt files, which in turn determine various object files and directories...).
Putting all this into a bundle would be a nightmare to create and maintain.
So in essence I need some way to supply an offline version of a GWT project that can access a large number of various files in its subdirectories without security issues.
So far all I can think of is:
A) There's something I can tell Chrome via HTML or GWT that allows these files to be read like Firefox can. (I suspect this isn't possible.)
An alternative to XMLHttpRequest, maybe?
B) I need to somehow package the game plus a web browser in an executable package that has permission to access files in its directories (http://www.appcelerator.com/titanium perhaps?).
C) I need to package, and have the user run, a full web server that can then deliver all these files in an XMLHttpRequest-accessible way.
D) A bit of a funny one: we can't tell the user to add flags to the browser start-up, but maybe I could write a game installer which just detects whether they have Chrome or Firefox, and then opens the game's HTML in their browser with the correct flags. This would open up security issues if they browse elsewhere with that instance, though, so I'd presumably need other flags to disable the URL bar, if that's possible.
I am happy to make various changes to our code to achieve any of this, but as mentioned above there's no way to determine all the files needing to be accessed at compile time.
And finally, of course, it all has to be as easy as possible for the end user.
Ideally just clicking an HTML file, or installing something no more complex than a standard Windows program.
Thanks for reading this rather long explanation; any pointers and ideas would be very welcome. I'll especially appreciate multiple different options, or feedback from anyone who's done this.
========================================
I accepted the suggestion to use Chromium Embedded below.
This works and does what I need (and much, much more).
To help others who might want to use it: I specifically made two critical changes to the example project.
Because CEF needs an absolute path to the web app's local HTML, I wrote a C++ function to get the directory the .exe was launched from (see the sketch after these two points). This was a platform-specific implementation, so if you support several OSes (which CEF does), be sure to write dedicated code for each.
Because my webapp makes use of local files, I enabled the Chrome flag for this by changing the browser settings:
browser_settings.file_access_from_file_urls = STATE_ENABLED;
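For the first change, a minimal Windows-only sketch (names are illustrative, and the CEF wiring itself is not shown):

#include <windows.h>
#include <string>

// return the directory containing the running .exe, e.g. "C:\Game\" (Windows-only)
std::string GetExeDirectory() {
    char path[MAX_PATH] = {0};
    DWORD len = GetModuleFileNameA(nullptr, path, MAX_PATH);  // full path of the executable
    std::string full(path, len);
    std::string::size_type slash = full.find_last_of("\\/");
    return (slash == std::string::npos) ? full : full.substr(0, slash + 1);
}

// the result can then be turned into the start URL, e.g. "file:///" + GetExeDirectory() + "index.html"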
These two changes were enough to get my app working, but they are obviously the bare minimum to make an application. Hopefully my findings will help others.
I'd suggest going the wrapper route, that is, providing a minimal browser implementation that opens your files directly; one option is Chromium Embedded [1]. If the nature of the application absolutely requires the files to be served as non-file URLs, then bundle a minimal web server, have the on-disk executable start the server, and open the bundled browser with whatever start-up arguments you want.
[1] https://bitbucket.org/chromiumembedded/cef
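If the bundled-webserver route is taken, a rough sketch using the JDK's built-in com.sun.net.httpserver (directory name and port are made up, and content types are omitted for brevity):

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class LocalGameServer {
    public static void main(String[] args) throws Exception {
        Path root = Paths.get("war").normalize();  // hypothetical: the compiled GWT output next to the exe
        HttpServer server = HttpServer.create(new InetSocketAddress("127.0.0.1", 8080), 0);
        server.createContext("/", exchange -> {
            String name = exchange.getRequestURI().getPath();
            Path file = root.resolve(name.equals("/") ? "index.html" : name.substring(1)).normalize();
            if (!file.startsWith(root) || !Files.isRegularFile(file)) {
                exchange.sendResponseHeaders(404, -1);  // not found, or an attempt to escape the root
                exchange.close();
                return;
            }
            byte[] body = Files.readAllBytes(file);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();  // then open http://127.0.0.1:8080/ in the bundled browser
    }
}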