How to copy the incremental parts of a cloned dataset back to the original dataset (snapshot) in ZFS?

zp1/tmp/origin@1 ==(clone & snapshot)==> zp1/tmp/clone@1
                                           ...working...
                                            (snapshot)
                                                ||
                                                \/
zp1/tmp/origin@2 <==================== zp1/tmp/clone@2
        ||
        [copy the incremental parts between zp1/tmp/clone@1
         and zp1/tmp/clone@2 to zp1/tmp/origin]
The bracketed "copy" step is what I want. I have tried the test procedure below, but it failed with a "does not match incremental source" error. Please note that this is not about backup.
Is it possible?
[test procedure]
# zfs create zp1/tmp/origin
# touch /zp1/tmp/origin/hi.txt
# zfs snapshot zp1/tmp/origin@1
# zfs clone zp1/tmp/origin@1 zp1/tmp/clone
# zfs snapshot zp1/tmp/clone@1
# touch /zp1/tmp/clone/bye.txt
# zfs snapshot zp1/tmp/clone@2
# zfs list -t all -r zp1/tmp
NAME               USED  AVAIL  REFER  MOUNTPOINT
zp1/tmp            256K   339G    96K  /zp1/tmp
zp1/tmp/clone       64K   339G    96K  /zp1/tmp/clone
zp1/tmp/clone@1      0B      -    96K  -
zp1/tmp/clone@2      0B      -    96K  -
zp1/tmp/origin      96K   339G    96K  /zp1/tmp/origin
zp1/tmp/origin@1     0B      -    96K  -
# zfs send -v -I zp1/tmp/clone@1 zp1/tmp/clone@2 | zfs receive -v zp1/tmp/origin@2
send from @1 to zp1/tmp/clone@2 estimated size is 32.6K
total estimated size is 32.6K
TIME        SENT   SNAPSHOT zp1/tmp/clone@2
receiving incremental stream of zp1/tmp/clone@2 into zp1/tmp/origin@2
cannot receive incremental stream: most recent snapshot of zp1/tmp/origin does not
match incremental source

The zfs send command compares a UUID of the source that it’s sending from and the destination that it’s sending to, to make sure you’re replaying the changes on top of a filesystem that had exactly the same data in it. In your case you’re skipping part of the timeline though, so this UUID doesn’t match.
Your naming scheme for the snapshots on the clone is confusing: clone@1 is not the same snapshot as original-fs@1 even though they probably point at the same data, so I'm going to rename them slightly to make that clearer:
original-fs@1 comes first
clone comes next and has the same UUID as original-fs@1 at the start
clone@2 comes next
clone@3 comes next
You're trying to send the delta between clone@2 and clone@3 onto a filesystem which doesn't have clone@2 on it yet. Instead, you should send from original-fs@1 to clone@3 to capture the whole timeline (or you could do two sends, from original-fs@1 to clone@2, then clone@2 to clone@3, if you want to recreate the full snapshot sequence on both versions of the data).
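In terms of the question's actual dataset names, that first variant is a single incremental send anchored at the shared origin snapshot; a sketch, untested against your pool:
# zp1/tmp/origin@1 is the clone's origin snapshot, so its GUID matches on both sides;
# add -F to the receive if zp1/tmp/origin has changed since @1
zfs send -v -i zp1/tmp/origin@1 zp1/tmp/clone@2 | zfs receive -v zp1/tmp/origin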
That said, this is just copying a bunch of data around for no reason. Why not just zfs promote the clone so that it becomes the parent filesystem? (Then you can delete the old parent and rename the new one to take its place.)
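A minimal sketch of the promote route, assuming nothing else depends on the old parent:
zfs rename zp1/tmp/clone@1 zp1/tmp/clone@1-old   # avoid a name clash: promote moves origin@1 onto the clone
zfs promote zp1/tmp/clone                        # clone becomes the parent; origin becomes the clone
zfs destroy zp1/tmp/origin                       # drop the old parent (verify dependents first)
zfs rename zp1/tmp/clone zp1/tmp/origin          # take over the old name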

Does osquery inotify install watchers on directories or files?

I am using osquery to monitor files and folders to get events on any operation on those files. There is a specific syntax for osquery configuration:
"/etc/": watches the entire directory at a depth of 1.
"/etc/%": watches the entire directory at a depth of 1.
"/etc/%%": watches the entire tree recursively with /etc/ as the root.
I am trying to evaluate the memory usage in case of watching a lot of directories. In this process I found the following statistics:
"/etc", "/etc/%", "/etc/%.conf": only 1 inotify handle is found registered in the name of osquery.
"/etc/%%: a few more than 289 inotify handles found which are registered in the name of osquery, given that there are a total of 285 directories under the tree. When checking the entries in /proc/$PID/fdinfo, all the inodes listed in the file points to just folders.
e.g. for "/etc/%.conf":
$ grep -r "^inotify" /proc/$PID/fdinfo/
18:inotify wd:1 ino:120001 sdev:800001 mask:3ce ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:01001200bc0f1cab
$ printf "%d\n" 0x120001
1179649
$ sudo debugfs -R "ncheck 1179649" /dev/sda1
debugfs 1.43.4 (31-Jan-2017)
Inode Pathname
1179649 //etc
The inotify watch is established on the whole directory here, but events are only reported for the matching files /etc/*.conf. My assumption is that osquery filters the events based on the file_paths patterns supplied, but I am not sure.
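For reference, the watch counts above came from enumerating the fdinfo entries, along these lines ($PID is osqueryd's PID):
# count the inotify watch descriptors held by the process
grep -r "^inotify" /proc/$PID/fdinfo/ | wc -l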
Another experiment that I performed to support the above claim was to take the example source from inotify(7) and run a watcher on a particular file. When I check the list of inotify watchers, it just shows:
$ ./a.out /tmp/inotify.cc &
$ cat /proc/$PID/fdinfo/3
...
inotify wd:1 ino:1a1 sdev:800001 mask:38 ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:a1010000aae325d7
$ sudo debugfs -R "ncheck 417" /dev/sda1
debugfs 1.43.4 (31-Jan-2017)
Inode Pathname
417 /tmp/inotify.cc
So, according to this experiment, establishing a watcher on a single file is possible (which is clear from the inotify man page). This supports the claim that osquery is doing some sort of filtering based on the file patterns supplied.
Could someone verify the claim or present otherwise?
My osquery config:
{
  "options": {
    "host_identifier": "hostname",
    "schedule_splay_percent": 10
  },
  "schedule": {
    "file_events": {
      "query": "SELECT * FROM file_events;",
      "interval": 5
    }
  },
  "file_paths": {
    "sys": ["/etc/%.conf"]
  }
}
$ osqueryd --version
osqueryd version 3.3.2
$ uname -a
Linux lab 4.9.0-6-amd64 #1 SMP Debian 4.9.88-1+deb9u1 (2018-05-07) x86_64 GNU/Linux
It sounds like some great sleuthing!
I think the comments in the source code support that claim; it's worth skimming them. The relevant files appear to be:
https://github.com/osquery/osquery/blob/master/osquery/tables/events/linux/file_events.cpp
https://github.com/osquery/osquery/blob/master/osquery/events/linux/inotify.cpp

Yocto - Create and populate a separate /home partition

I'm creating quite a simple Yocto image based on x86.
I want the / file system to be read-only, so I set
IMAGE_FEATURES_append = " read-only-rootfs "
in a custom copy of the original core-image-minimal.bb. I do want /home to be writable and on a separate partition, though.
So, I'm adding a line
part /home --ondisk sda --fstype=ext4 --label home --align 1024 --size 600
in genericx86.wks. This creates the actual /home partition in the final wic image, but it naturally does not hold any data, as there's no corresponding rootfs for it. This leads to the following quite expected message after boot: No directory, logging in with HOME=/.
There's surprisingly little info about this on the internet. There's this explanation:
It's much simpler to create or modify build recipes to prepare one rootfs directory per partition.
I just wish there were a reference or example in the documentation showing how to achieve that.
I can see that the partitions are populated by Python scripts (plugins) like rootfs.py, and that image parameters like IMAGE_ROOTFS_SIZE are specified in recipe files like the mentioned genericx86.wks, but this is just not enough for me to connect the pieces.
I've read creating-partitioned-images-using-wic and the linked OpenEmbedded kickstart manuals; there are no clues there.
I'd appreciate someone's kind help.
With WIC you can do something like this:
custom.wks.in:
...
part / --source rootfs --ondisk sda --fstype=ext4 --label system --exclude-path=home/
part /home --source rootfs --rootfs-dir=${IMAGE_ROOTFS}/home --ondisk sda --fstype=ext4 --label home
...
Note: if you want to use ${IMAGE_ROOTFS} in the WKS file, it is important to name the file with a .in suffix.
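To connect the remaining piece, the build is pointed at the custom kickstart file via the standard WKS_FILE variable; where you set it (the image recipe or local.conf) depends on your setup:
# select the templated kickstart file used by wic
WKS_FILE = "custom.wks.in"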

What corruption is indicated by WinDbg and !chkimg?

I am often getting BSODs, and WinDbg reports similar corruption for most of them:
4: kd> !chkimg -lo 50 -d !nt
fffff80177723e6d-fffff80177723e6e 2 bytes - nt!MiPurgeZeroList+6d
[ 80 fa:00 e9 ]
2 errors : !nt (fffff80177723e6d-fffff80177723e6e)
and
CHKIMG_EXTENSION: !chkimg -lo 50 -d !nt
fffff8021531ae6d-fffff8021531ae6e 2 bytes - nt!MiPurgeZeroList+6d
[ 80 fa:00 aa ]
2 errors : !nt (fffff8021531ae6d-fffff8021531ae6e)
What does it mean? What is compared with what, and how can the corruption be similar across crashes? Does it explicitly indicate a RAM problem?
UPDATE
What do these numbers mean: fffff80177723e6d and fffff8021531ae6d? What does it mean that the endings coincide?
What does the following mean: nt!MiPurgeZeroList+6d?
I already answered this on superuser.com. WinDbg downloads the original EXEs/DLLs from the symbol server, and the !chkimg command detects corruption in the images of executable files by comparing them to the copy on the symbol store. As for the numbers: nt!MiPurgeZeroList+6d means byte offset 0x6d into the function MiPurgeZeroList in the nt module (the kernel), so both dumps report corruption at the same relative location; the absolute addresses differ only because ASLR loads the kernel at a different base each boot, which is why their endings coincide.
All sections of the file are compared, except for sections that are discardable, that are writeable, that are not executable, that have "PAGE" in their name, or that are from INITKDBG. You can change this behavior by using the -ss, -as, or -r switches.
!chkimg displays any mismatch between the image and the file as an
image error, with the following exceptions:
Addresses that are occupied by the Import Address Table (IAT) are not checked.
Certain specific addresses in Hal.dll and Ntoskrnl.exe are not checked, because certain changes occur when these sections are loaded.
To check these addresses, include the -nospec option.
If the byte value 0x90 is present in the file, and if the value 0xF0 is present in the corresponding byte of the image (or vice
versa), this situation is considered a match. Typically, the symbol
server holds one version of a binary that exists in both uniprocessor
and multiprocessor versions. On an x86-based processor, the lock
instruction is 0xF0, and this instruction corresponds to a nop (0x90)
instruction in the uniprocessor version. If you want !chkimg to
display this pair as a mismatch, set the -noplock option.
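For instance, re-running the check with those exclusions disabled would look roughly like this (the command from your dump plus the -nospec switch from the quoted documentation):
4: kd> !chkimg -lo 50 -d -nospec nt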
If the RAM is fine, check the HDD / HDD cables for errors (disk diag tool and run chkdsk to detect and fix NTFS issues). You can also connect the HDD to different SATA port on the mainboard.

Can you get the number of lines of code from a GitHub repository?

In a GitHub repository you can see “language statistics”, which displays the percentage of the project that’s written in a language. It doesn’t, however, display how many lines of code the project consists of. Often, I want to quickly get an impression of the scale and complexity of a project, and the count of lines of code can give a good first impression. 500 lines of code implies a relatively simple project, 100,000 lines of code implies a very large/complicated project.
So, is it possible to get the lines of code written in the various languages from a GitHub repository, preferably without cloning it?
The question “Count number of lines in a git repository” asks how to count the lines of code in a local Git repository, but:
You have to clone the project, which could be massive. Cloning a project like Wine, for example, takes ages.
You would count lines in files that wouldn't necessarily be code, like i18n files.
If you count just (for example) Ruby files, you'd potentially miss a massive amount of code in other languages, like JavaScript. You'd have to know beforehand which languages the project uses, and you'd have to repeat the count for every language it uses.
All in all, this is potentially far too time-intensive for “quickly checking the scale of a project”.
You can run something like
git ls-files | xargs wc -l
which will give you the total count.
You can also narrow it down, for example to just the JavaScript files:
git ls-files | grep '\.js$' | xargs wc -l
Or use this handy little tool: https://line-count.herokuapp.com/
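One caveat, not from the original answer: xargs splits on whitespace, so file names containing spaces will break the count. A null-delimited variant avoids that:
git ls-files -z | xargs -0 wc -l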
A shell script, cloc-git
You can use this shell script to count the number of lines in a remote Git repository with one command:
#!/usr/bin/env bash
git clone --depth 1 "$1" temp-linecount-repo &&
printf "('temp-linecount-repo' will be deleted automatically)\n\n\n" &&
cloc temp-linecount-repo &&
rm -rf temp-linecount-repo
Installation
This script requires CLOC (“Count Lines of Code”) to be installed. cloc can probably be installed with your package manager – for example, brew install cloc with Homebrew. There is also a docker image published under mribeiro/cloc.
You can install the script by saving its code to a file cloc-git, running chmod +x cloc-git, and then moving the file to a folder in your $PATH such as /usr/local/bin.
Usage
The script takes one argument, which is any URL that git clone will accept. Examples are https://github.com/evalEmpire/perl5i.git (HTTPS) or git@github.com:evalEmpire/perl5i.git (SSH). You can get this URL from any GitHub project page by clicking "Clone or download".
Example output:
$ cloc-git https://github.com/evalEmpire/perl5i.git
Cloning into 'temp-linecount-repo'...
remote: Counting objects: 200, done.
remote: Compressing objects: 100% (182/182), done.
remote: Total 200 (delta 13), reused 158 (delta 9), pack-reused 0
Receiving objects: 100% (200/200), 296.52 KiB | 110.00 KiB/s, done.
Resolving deltas: 100% (13/13), done.
Checking connectivity... done.
('temp-linecount-repo' will be deleted automatically)
171 text files.
166 unique files.
17 files ignored.
http://cloc.sourceforge.net v 1.62 T=1.13 s (134.1 files/s, 9764.6 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Perl                           149           2795           1425           6382
JSON                             1              0              0            270
YAML                             2              0              0            198
-------------------------------------------------------------------------------
SUM:                           152           2795           1425           6850
-------------------------------------------------------------------------------
Alternatives
Run the commands manually
If you don’t want to bother saving and installing the shell script, you can run the commands manually. An example:
$ git clone --depth 1 https://github.com/evalEmpire/perl5i.git
$ cloc perl5i
$ rm -rf perl5i
Linguist
If you want the results to match GitHub’s language percentages exactly, you can try installing Linguist instead of CLOC. According to its README, you need to gem install linguist and then run linguist. I couldn’t get it to work (issue #2223).
I created an extension for Google Chrome browser - GLOC which works for public and private repos.
Counts the number of lines of code of a project from:
project detail page
user's repositories
organization page
search results page
trending page
explore page
If you go to the graphs/contributors page, you can see a list of all the contributors to the repo and how many lines they've added and removed.
Unless I'm missing something, subtracting the aggregate number of lines deleted from the aggregate number of lines added among all contributors should yield the total number of lines of code in the repo. (EDIT: it turns out I was missing something after all. Take a look at orbitbot's comment for details.)
UPDATE:
This data is also available in GitHub's API. So I wrote a quick script to fetch the data and do the calculation:
'use strict';

async function countGithub(repo) {
    const response = await fetch(`https://api.github.com/repos/${repo}/stats/contributors`);
    const contributors = await response.json();
    const lineCounts = contributors.map(contributor => (
        contributor.weeks.reduce((lineCount, week) => lineCount + week.a - week.d, 0)
    ));
    const lines = lineCounts.reduce((lineTotal, lineCount) => lineTotal + lineCount);
    window.alert(lines);
}

countGithub('jquery/jquery'); // or count anything you like
Just paste it in a Chrome DevTools snippet, change the repo and click run.
Disclaimer (thanks to lovasoa):
Take the results of this method with a grain of salt, because for some repos (sorich87/bootstrap-tour) it results in negative values, which might indicate there's something wrong with the data returned from GitHub's API.
UPDATE:
Looks like this method to calculate total line numbers isn't entirely reliable. Take a look at orbitbot's comment for details.
You can clone just the latest commit using git clone --depth 1 <url> and then perform your own analysis using Linguist, the same software Github uses. That's the only way I know you're going to get lines of code.
Another option is to use the API to list the languages the project uses. It doesn't give them in lines but in bytes. For example...
$ curl https://api.github.com/repos/evalEmpire/perl5i/languages
{
  "Perl": 274835
}
Though take that with a grain of salt: that project includes YAML and JSON, which the web site acknowledges but the API does not.
Finally, you can use code search to ask which files match a given language. This example asks which files in perl5i are Perl. https://api.github.com/search/code?q=language:perl+repo:evalEmpire/perl5i. It will not give you lines, and you have to ask for the file size separately using the returned url for each file.
Not currently possible on GitHub.com or their APIs
I have talked to customer support and confirmed that this cannot be done on github.com. They have passed the suggestion along to the GitHub team, though, so hopefully it will be possible in the future. If so, I'll be sure to edit this answer.
Meanwhile, Rory O'Kane's answer is a brilliant alternative based on cloc and a shallow repo clone.
From @Tgr's comment, there is an online tool:
https://codetabs.com/count-loc/count-loc-online.html
You can use tokei:
cargo install tokei
git clone --depth 1 https://github.com/XAMPPRocky/tokei
tokei tokei/
Output:
===============================================================================
 Language            Files        Lines         Code     Comments       Blanks
===============================================================================
 BASH                    4           48           30           10            8
 JSON                    1         1430         1430            0            0
 Shell                   1           49           38            1           10
 TOML                    2           78           65            4            9
-------------------------------------------------------------------------------
 Markdown                4         1410            0         1121          289
 |- JSON                 1           41           41            0            0
 |- Rust                 1           47           38            5            4
 |- Shell                1           19           16            0            3
 (Total)                           1517           95         1126          296
-------------------------------------------------------------------------------
 Rust                   19         3750         3123          119          508
 |- Markdown            12          358            5          302           51
 (Total)                           4108         3128          421          559
===============================================================================
 Total                  31         6765         4686         1255          824
===============================================================================
Tokei has support for badges:
Count Lines
[![](https://tokei.rs/b1/github/XAMPPRocky/tokei)](https://github.com/XAMPPRocky/tokei)
By default the badge will show the repo's LoC (lines of code); you can also tell it to show a different category by using the ?category= query string. It can be either code, blanks, files, lines, or comments.
Count Files
[![](https://tokei.rs/b1/github/XAMPPRocky/tokei?category=files)](https://github.com/XAMPPRocky/tokei)
You can use the GitHub API to get the SLOC, as in the following function:
function getSloc(repo, tries) {
    // repo is the repo's path
    if (!repo) {
        return Promise.reject(new Error("No repo provided"));
    }

    // GitHub's API may return an empty object the first time it is accessed
    // We can try several times then stop
    if (tries === 0) {
        return Promise.reject(new Error("Too many tries"));
    }

    let url = "https://api.github.com/repos" + repo + "/stats/code_frequency";

    return fetch(url)
        .then(x => x.json())
        .then(x => x.reduce((total, changes) => total + changes[1] + changes[2], 0))
        .catch(err => getSloc(repo, tries - 1));
}
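A usage sketch (note the leading slash, since the path is appended directly after /repos): getSloc("/jquery/jquery", 5).then(n => console.log(n)).catch(console.error).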
Personally, I made a Chrome extension which shows the number of SLOC on both the GitHub project list and the project detail page. You can also set your personal access token to access private repositories and bypass the API rate limit.
You can download it from https://chrome.google.com/webstore/detail/github-sloc/fkjjjamhihnjmihibcmdnianbcbccpnn
The source code is available at https://github.com/martianyi/github-sloc
Hey all, this is ridiculously easy:
Create a new branch from your first commit.
When you want to find out your stats, create a new PR from main.
The PR will show you the number of changed lines; since you're doing a PR from the first commit, all your code will be counted as new lines.
And the added benefit is that if you don't approve the PR and just leave it in place, the stats (number of commits, files changed, and total lines of code) will simply keep up to date as you merge changes into main. :) Enjoy.
Firefox add-on Github SLOC
I wrote a small firefox addon that prints the number of lines of code on github project pages: Github SLOC
npm install sloc -g
git clone --depth 1 https://github.com/vuejs/vue/
sloc ".\vue\src" --format cli-table
rm -rf ".\vue\"
Instructions and Explanation
Install sloc from npm, a command line tool (Node.js needs to be installed).
npm install sloc -g
Clone shallow repository (faster download than full clone).
git clone --depth 1 https://github.com/facebook/react/
Run sloc and specify the path that should be analyzed.
sloc ".\react\src" --format cli-table
sloc supports formatting the output as a cli-table, as json or csv. Regular expressions can be used to exclude files and folders (Further information on npm).
Delete repository folder (optional)
Powershell: rm -r -force ".\react\" or on Mac/Unix: rm -rf ".\react\"
It is also possible to get details for every file with the --details option:
sloc ".\react\src" --format cli-table --details
Open terminal and run the following:
curl -L "https://api.codetabs.com/v1/loc?github=username/reponame"
If the question is "can you quickly get NUMBER OF LINES of a github repo", the answer is no as stated by the other answers.
However, if the question is "can you quickly check the SCALE of a project", I usually gauge a project by looking at its size. Of course the size will include deltas from all active commits, but it is a good metric as the order of magnitude is quite close.
E.g.
How big is the "docker" project?
In your browser, enter api.github.com/repos/ORG_NAME/PROJECT_NAME
i.e. api.github.com/repos/docker/docker
In the response hash, you can find the size attribute:
{
  ...
  "size": 161432,
  ...
}
This should give you an idea of the relative scale of the project. The number seems to be in KB, but when I checked it on my computer it's actually smaller, even though the order of magnitude is consistent. (161432KB = 161MB, du -s -h docker = 65MB)
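The same lookup can be done from a terminal, assuming curl and jq are available:
curl -s https://api.github.com/repos/docker/docker | jq '.size'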
Pipe the per-file line counts to sort to order the files by line count:
git ls-files | xargs wc -l | sort -n
This is easy if you are using VS Code and you clone the project first. Just install the Lines of Code (LOC) VS Code extension and then run LineCount: Count Workspace Files from the Command Palette.
The extension shows summary statistics by file type, and it also outputs result files with detailed information for each folder.
There is another online tool that counts lines of code for public and private repos without having to clone/download them: https://klock.herokuapp.com/
None of the answers here satisfied my requirements. I only wanted to use existing utilities. The following script will use basic utilities:
Git
GNU or BSD awk
GNU or BSD sed
Bash
Get total lines added to a repository (subtracts lines deleted from lines added).
#!/bin/bash
git diff --shortstat 4b825dc642cb6eb9a060e54bf8d69288fbee4904 HEAD | \
sed 's/[^0-9,]*//g' | \
awk -F, '!($2 > 0) {$2="0"};!($3 > 0) {$3="0"}; {print $2-$3}'
Get lines of code filtered by specified file types of known source code (e.g. *.py files or add more extensions, etc).
#!/bin/bash
git diff --shortstat 4b825dc642cb6eb9a060e54bf8d69288fbee4904 HEAD -- *.{py,java,js} | \
sed 's/[^0-9,]*//g' | \
awk -F, '!($2 > 0) {$2="0"};!($3 > 0) {$3="0"}; {print $2-$3}'
4b825dc642cb6eb9a060e54bf8d69288fbee4904 is the id of the "empty tree" in Git and it's always available in every repository.
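You can derive that hash yourself rather than taking it on faith, since it is simply the object id of an empty tree:
git hash-object -t tree /dev/null
# prints 4b825dc642cb6eb9a060e54bf8d69288fbee4904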
Sources:
My own scripting
How to get Git diff of the first commit?
Is there a way of having git show lines added, lines changed and lines removed?
shields.io has a badge that can count up all the lines for you; for example, it can be pointed at the Raycast extensions repo.
You can use Sourcegraph, an open-source search engine for code. It can connect to your GitHub account and index the content, and then in the admin section you can see the number of lines of code indexed.
I made an NPM package specifically for this use case. It provides a CLI tool that takes the directory path and the folders/files to ignore.
It goes like this:
npm i -g @quasimodo147/countlines
to get the $ countlines command in your terminal.
Then you can do
countlines . node_modules build dist

How to load multiple osm files into Nominatim

I need to figure out the process for loading multiple OSM files into a Nominatim database. I have everything set up and can load a single file with no issues.
Basically, what I'm trying to do is load some of the Geofabrik OSM extracts for only a part of the world. So I'm grabbing, say, the North America and South America OSM files, or any two on their site.
For the first load I use the setup.php:
./utils/setup.php --osm-file file.osm --all --osm2pgsql-cache 4000
I'm not sure, if I have another file (file2.osm), how to load it into the database while keeping the original data.
Basically, I just want pieces of the world, and I only need to load data every six months or so. I don't need daily updates, etc.
I need to split the files up because it just takes too long to load, and I want to manage it better.
Can I use update.php? But I'm not sure what parameters to use.
I thought about loading all the data with update and the no-index clause, then maybe building the index afterwards?
I did try to re-run setup.php for the second file, but it just hung for a long time.
For the second file:
./utils/setup.php --import-data --osm-file file2.osm --osm2pgsql-cache 4000
But this just hangs on Setting up table: planet_osm_ways. (I tested very small OSM files that should finish within minutes, but it just hangs.)
The files that I'm using are all non-intersecting, so they are not truly updates. I have a North America file and a South America file. How do I load both into Nominatim separately?
Thanks
The answer can be found at help.openstreetmap.org.
First you need to import it via the update script: ./utils/update.php --import-file <yourfile>. Then you need to trigger a re-indexing of the data: ./utils/update.php --index
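Applied to the question's two-extract scenario, that would look something like this (the file name is an assumption):
# import the second extract on top of the existing database, then re-index once
./utils/update.php --import-file /srv/nominatim/src/south-america-latest.osm.pbf
./utils/update.php --index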
But according to lonvia (one of the Nominatim developers), this will be very slow, and it is better to merge all your files first and then import them as one large file.
Sample merging code, combining Andorra, Malta and Liechtenstein:
curl -L 'http://download.geofabrik.de/europe/andorra-latest.osm.pbf' --create-dirs -o /srv/nominatim/src/andorra.osm.pbf
curl -L 'http://download.geofabrik.de/europe/malta-latest.osm.pbf' --create-dirs -o /srv/nominatim/src/malta.osm.pbf
curl -L 'http://download.geofabrik.de/europe/liechtenstein-latest.osm.pbf' --create-dirs -o /srv/nominatim/src/liechtenstein.osm.pbf
osmconvert /srv/nominatim/src/andorra.osm.pbf -o=/srv/nominatim/src/andorra.o5m
osmconvert /srv/nominatim/src/malta.osm.pbf -o=/srv/nominatim/src/malta.o5m
osmconvert /srv/nominatim/src/liechtenstein.osm.pbf -o=/srv/nominatim/src/liechtenstein.o5m
osmconvert /srv/nominatim/src/andorra.o5m /srv/nominatim/src/malta.o5m /srv/nominatim/src/liechtenstein.o5m -o=/srv/nominatim/src/data.o5m
osmconvert /srv/nominatim/src/data.o5m -o=/srv/nominatim/src/data.osm.pbf;
More about osmconvert: https://wiki.openstreetmap.org/wiki/Osmconvert
Once merged, you can run:
sudo -u nominatim /srv/Nominatim/build/utils/setup.php \
    --osm-file /srv/nominatim/src/data.osm.pbf \
    --all \
    --threads ${BUILD_THREADS} \
    --osm2pgsql-cache ${OSM2PGSQL_CACHE}
# e.g. BUILD_THREADS=16, OSM2PGSQL_CACHE=24000