My $LFS storage is gone every time I reboot my machine - linux-from-scratch

I'm still pretty new at this, so I hope you can understand my question and answer it respectfully.
I haven't had any problems doing the installation itself (the "book" is that good). But I don't have the health or money to keep paying for electricity around the clock, so I have to shut down and reboot the machine after finishing part of the build.
And, as expected, whenever I power my machine back on, all my pre-configured files and folders under $LFS (/mnt/lfs, used specifically for building the LFS system) are suddenly gone. I say "as expected" because I ran into the same situation before when building Arch, Gentoo, etc. Back then I was booting from a live USB, so I didn't think it was a real problem. But now, on a machine with Arch installed, I'm still hitting it.
I think the main problem is the mount point or something like that. Does anyone have any ideas?
(Also, can packages like gcc-12.2.0 be used for building LFS?)
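If the mount point is the culprit, the usual cause is that a plain mount lasts only until shutdown: after a reboot, /mnt/lfs is just an empty directory on the host's root filesystem until the partition is mounted again, which would explain the files "disappearing". A minimal sketch, assuming the LFS partition is /dev/sda4 with ext4 (substitute your own device and filesystem):

    # Mount the LFS partition for the current session only
    # (this does NOT survive a reboot):
    mkdir -pv $LFS
    mount -v -t ext4 /dev/sda4 $LFS

    # To make the mount persistent, add a line like this
    # to /etc/fstab on the host system:
    /dev/sda4  /mnt/lfs  ext4  defaults  0  2

With the fstab entry in place, the partition (and everything built onto it) is mounted automatically at every boot.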

Related

Have you ever experienced issues with a Postgres database based on just the db name?

I've been idly bashing away at an issue with Postgres for months now. I've a bit of software (custom in-house stuff) that, on 24 out of 25 servers, runs a certain process absolutely fine, no issues whatsoever.
On the 25th server, though, the process wouldn't quite complete properly; it would fail at the final hurdle, which was a simple date change.
It's been a back-burner type issue, so I hadn't committed much time to working it out until management started to get angsty, and I spent most of yesterday bashing away at it.
Obvious checks were done first:
Postgres version (9.6)
Software version
Windows patches (Server 2019)
GPOs
NTFS permissions
etc.
All checked out as matching across every server. I went through the Postgres and in-house software logs at length, and had one of the developers build a standalone executable for the process with a ridiculous amount of logging. Still no dice, no indicators. Procmon and Wireshark showed the same story: nothing clear at all as to what was going on.
So we took a backup of the database, loaded it in under a different name for testing, and ran the process, only to find that it now works fine on the cloned database. This led us to think there was maybe a formatting issue of some kind in the database, conscious that doing the backup and restore would shake things around. So we went back to the live server, backed it up again, deleted the DB from Postgres, and restored it from the backup.
No dice. Still broken.
Cue some serious confusion. We had done essentially the same thing as when cloning live to test, yet were still getting the same fault at the end of the process.
After some head scratching and more prodding around in the logs, I hit upon the idea of taking a fresh backup of the live DB, deleting the database, restoring the backup under a different name, and then pointing the live software install at the newly named live DB and testing the process again.
It works!
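For anyone wanting to reproduce it, the sequence that finally worked amounted to roughly the following, using the standard Postgres client tools (the database and file names here are placeholders, not the poster's real names):

    # Back up the live DB in custom format, drop it, and
    # restore it under a NEW name:
    pg_dump -U postgres -Fc -f livedb.dump livedb
    dropdb -U postgres livedb
    createdb -U postgres livedb_renamed
    pg_restore -U postgres -d livedb_renamed livedb.dump
    # ...then repoint the application at livedb_renamed.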
For clarity, the names involved are basic alpha only: upper and lower case, no numbers, no symbols, and less than 15 characters in length.
I'm at a loss as to why it's now working, and I'd love to get some input from the community.

How to stop antivirus false positives every time we re-release software?

Windows Defender and AVG/Avast pick up our software application as a virus (a false positive) every time we release. We have a code signing certificate and add a taggant as well.
Every time we release the software, we have to go through the process of filing a false positive form on multiple AV vendors' sites.
How can we get our company code signing cert marked as safe, or otherwise avoid this time-consuming false positive report process on each release?
Edit: Is there any premier support we can pay for to have this done automatically?
Edit 2: we actually had our certificate revoked due to "malware distribution" as a result of these false positives. It seems there is no recourse other than to buy another one.
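For reference, a typical release-signing step with an RFC 3161 timestamp looks something like the sketch below (the certificate file, password, and timestamp URL are placeholders, not the poster's actual setup):

    signtool sign /fd SHA256 /f company-cert.pfx /p PfxPasswordHere /tr http://timestamp.digicert.com /td SHA256 MyApp.exe

A timestamp at least keeps the signature verifiable after the certificate expires; reputation with SmartScreen and the AV vendors builds up per certificate over time, which is also why the revocation mentioned in Edit 2 effectively resets it.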
A signing cert doesn't help most of the time; it's probably a code pattern similar to one from a virus in their databases. The best you can do is contact the AV vendors and ask them to whitelist you to get past it.
My recommendation is to contact the AV vendors and tell them your problem. Your software probably has some strings or patterns defined that trigger the AVs' heuristics. You can try to find those strings in your code base, base64/XOR/encrypt them, and see what happens with the AV; that may help solve your problem.
While it is certainly possible that your software shares some characteristics with known malware, I would guess that it is a "cloud" detection.
Cutting through the marketing speak, that basically means (among other possible causes) your file is flagged as suspicious because it has not been seen on many other PCs.
Try removing anything that could trip antivirus flags, like self-extracting archives, UPX packing, file encryption, suspicious website requests, or suspicious behaviour.
Why remove these?
Self-extracting archives are flagged because unpacking yourself is suspicious behaviour (not something normal programs do).
UPX is flagged because some malware tries to hide by being packed with UPX, so antiviruses have to unpack it to scan it.
File encryption may easily be detected as Riskware / EncoderTool / Ransomware.
Suspicious websites: avoid downloading files from strange URLs.
I had this problem with a program's auto-updater; an antivirus detected it as a TrojanDownloader.
If your program doesn't do any of these things, I can't help you further, as that is a problem the whole programmer community shares.
I hope that helps.

HoloLens store submission: input not working

Building from VS to the device works just fine, but a build with the Master configuration won't. Since Master is required for store submission, the build downloaded from the store has the problem too.
I have been trying to track down the issue; basically, input just stops working. If I perform a tap, all I get is:
The thread 0x12a4 has exited with code 0 (0x0).
I think spatial mapping also gets weird treatment, which got me thinking it may be related to multithreading (since the only correlation I can think of is that both use multithreading internally).
I thought at first it was only one of our apps, but then I was told a second, totally different app is hitting the same problem.
Does anyone know what to do?
I posted on the HoloLens forum about 3 weeks ago, but no one has replied yet.
Contacting Microsoft is basically a waste of time, as I will probably never reach anyone with that kind of knowledge.
Any ideas?
The issue seems to come from the multiple .rcs files I was keeping. Keeping only the latest one seems to let the Master build work.
I once had this problem a long time ago; tbh I think it has something to do with the OS.
I started on a Windows 10 Home PC and was getting the error. When I switched to another PC with Windows 10 Enterprise, it worked fine, so I have been sticking with the Windows 10 Enterprise PC ever since.

Log4perl problems on Ubuntu Server

I have been running a large-ish site for years, with a typical nginx / Apache setup, where all the "pages" are mod_perl. Up until recently I was running on FreeBSD. After a hardware replacement, and for other reasons, I was forced to migrate to Ubuntu (12.04.2 LTS), which I use on many other servers, so no big deal. However, I now have a problem with my logs.
For some reason, more and more "actions" are no longer logged through Log4perl. This was never a problem on my previous setup, but now I seem to "lose" between 2% and 15% of my log entries. This is checked and verified by logging the same data to a database at the same time.
Does anyone have a clue why this would happen?
Is there something I should know about large log files and Ubuntu? (It's not that large tbh, 390 MB atm.)
I get nothing in my error logs anywhere, and as the database logging happens AFTER the $log->info("ENTRY HERE"), the script obviously doesn't crash. But I am missing a lot of those ENTRY HEREs :)
The log in question is "hit" about once per second on average, but I shouldn't think that would be a big problem?
Could there be "too many processes" trying to write to the log in parallel, causing locking issues and preventing data from being appended to the file? Are there any typical Ubuntu settings that might be adjusted for something like this?
Any help would be greatly appreciated.
Spinner
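If parallel writers turn out to be the issue, one thing worth trying is the syswrite option of Log::Log4perl::Appender::File, which (per its documentation) makes each append a single atomic write so messages from concurrent processes don't clobber each other. A minimal config sketch; the filename and layout below are placeholders:

    # log4perl.conf - file appender with atomic appends
    log4perl.logger                    = INFO, LOGFILE
    log4perl.appender.LOGFILE          = Log::Log4perl::Appender::File
    log4perl.appender.LOGFILE.filename = /var/log/mysite/actions.log
    log4perl.appender.LOGFILE.mode     = append
    log4perl.appender.LOGFILE.syswrite = 1
    log4perl.appender.LOGFILE.layout   = PatternLayout
    log4perl.appender.LOGFILE.layout.ConversionPattern = %d %p %m%n

Under mod_perl there can be dozens of Apache children holding the same log open, so buffered prints from different processes can interleave or overwrite one another, which would look exactly like randomly "lost" entries.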

Migrating from Joomla 1.5.x to 1.7

I thought of migrating from J 1.5.23 to 1.7 and, like almost everyone, I too ran into problems (good thing I backed up my site).
The problem I am facing is that jUpgrade gets stuck at 'Migrating undefined'. 1.7 gets downloaded completely and also extracts correctly. I think I am hitting this because I somehow run out of space during the installation. What I wanted to know is: how much disk space does the migration require?
I have about 25 MB free on my server, and I am allowed only 100 MB in total.
Thank you.
And btw, I also unchecked the 'skip downloads' option; that didn't work for me.
You will probably need more disk space than you have available. Your current site, plus the downloaded zip file, plus space for extracting the files, plus any backups you have on the server are likely to exceed your 100 MB.
I'd recommend taking a backup of your site, setting the site up on a localhost server (XAMPP, WAMP, etc.) on your own machine, and running the migration there. This has the benefit of not hitting the arbitrary limits of what sounds like a very low-budget web host.
Obviously you'll have the extra complexity of setting up your own server on your PC, but there are many tutorials out there that will walk you through the process, and learning new skills is always good.
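If it helps, the backup for that kind of local migration can be as simple as archiving the web root and dumping the site's database; the paths, database name, and user below are placeholders:

    # Archive the Joomla files and dump the database:
    tar -czf joomla-site.tar.gz /path/to/public_html
    mysqldump -u dbuser -p joomla_db > joomla_db.sql

Restore both into the local XAMPP/WAMP install, run jUpgrade there without the 100 MB ceiling, and upload the finished 1.7 site afterwards.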