Efficiently make 100 copies of a Micro SD card

I have a 16GB Micro SD card that I would like to make a large number (say, 100) of copies of. I can use the dd command to back up the volume, and I can use dd again to copy it onto a new Micro SD. However, doing this is extremely slow (estimates to follow).
My question is: how can I most efficiently make these 100 copies? Ideally I could keep the total time down to within a few days, and the human-involved time (e.g. swapping cards and running commands) minimal, without requiring many computers or great expense.
[The reason I ran into this problem is that I build a product using Raspberry Pi 400s (the product is Go Note Go; you can learn about it here: https://davidbieber.com/snippets/2023-01-16-go-note-go-features/). They boot from MicroSD. I want to put my software on a MicroSD so I can set up new devices with minimal configuration.]
My default overall strategy is to first make a backup from the SD card I want to copy, and then restore that backup onto the other 100 cards one at a time (since I only have one laptop to make the copies from).
Here's an example of the backup command I'm using:
sudo dd if=/dev/disk3 conv=sparse,noerror of=~/Desktop/sdcard.img bs=1m
I'm using a USB Type-C/OTG card reader/writer on a Macbook Pro.
After this command runs for a few minutes, ctrl-t (to check the status of the transfer) reports:
1211+0 records in
1211+0 records out
1269825536 bytes transferred in 216.104367 secs (5875983 bytes/sec)
This suggests that the complete backup will take 45 minutes. Indeed, after 45 minutes the backup is complete. (Update: using /dev/rdisk3 instead of /dev/disk3 reduces this time to 11m39s.)
If I raise bs to bs=64m, this estimate rises to 90 minutes.
Though the backup time is not critical to the question (the 100 restores dominate the total time), I would love to have faster backups too.
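One thing I may try (untested, just a sketch): compress the image as it comes off the raw device. It won't speed up the read itself, but it keeps the image file small, and a restore can stream it back through gunzip the same way. The path and gzip level here are just illustrative:
# Back up via the raw device, compressing on the fly
sudo dd if=/dev/rdisk3 bs=1m conv=noerror | gzip -1 > "$HOME/Desktop/sdcard.img.gz"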
Here's an example of the restore command I'm using:
sudo dd if=~/Desktop/sdcard.img of=/dev/rdisk3 bs=1m
(I discovered "rdisk" while writing this question, or else I would have considered it for backup too.)
ctrl-t reports:
load: 5.62 cmd: dd 75163 uninterruptible 0.00u 0.43s
952+0 records in
952+0 records out
998244352 bytes transferred in 48.697956 secs (20498691 bytes/sec)
suggesting the restore will take 13 minutes. (Update: Yes, it took 13 minutes.)
So 100 restores will take 22 hours, and will require a human to swap out the card every 13 minutes.
Can you suggest a less time-consuming or less arduous approach? (22 hours is fine, but the human-involvement every 13 minutes is rather inconvenient.)
Here's my current thinking:
Switching from /dev/disk3 to /dev/rdisk3 may have brought the total transfer time down considerably (I'm not 100% sure, since I haven't run the same operation both ways).
I see that for $800 there is https://www.ureach-estore.com/products/1-to-7-sd-duplicator-silver-series, which appears to make several copies of an SD card at once at high speed, but it's costly, so I'd like to explore other options first.
Maybe I can find a cheaper multi-SD-card reader/writer -- I haven't found one yet. Or maybe I can use a USB hub and several ordinary card reader/writers and write to them in parallel (a rough sketch of that follows below). How much would writing to multiple cards at once degrade the write speed? How many disks can a Mac handle at once? I don't know the answers to these questions yet.
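If the multi-reader route works out, my rough plan is a loop like the one below: one dd per card, all running in the background. The device names are placeholders -- I'd confirm each one with diskutil list first, since dd to the wrong disk is destructive.
# Cache sudo credentials so the background jobs don't all prompt at once
sudo -v
# Write the same image to several readers in parallel (rdisk4/5/6 are placeholders)
for disk in rdisk4 rdisk5 rdisk6; do
  sudo dd if="$HOME/Desktop/sdcard.img" of=/dev/$disk bs=1m &
done
wait   # returns once every copy has finished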

Related

How do I have Mongo 3.0 / WiredTiger load my whole database into RAM?

I have a static database (that will never even receive a write) of around 5 GB, while my server RAM is 30 GB. I'm focusing on returning complicated aggregations to the user as fast as possible, so I don't see a reason why I shouldn't have (a) the indexes and (b) the entire dataset stored entirely in RAM, and (c) automatically stored there whenever the Mongo server boots up. Currently my main bottleneck is running group commands to find unique elements out of millions of rows.
My question is, how can I do either (a), (b), or (c) while running on the new Mongo/WiredTiger? I know the "touch" command doesn't work with WiredTiger, so most information on the Internet seems out of date. Are (a), (b), or (c) already done automatically? Should I not be doing each of these steps with this use case?
Normally you shouldn't have to do anything. Disk pages are loaded into RAM on request and stay there; if there is no more free memory, the oldest unused pages are evicted so the memory can be used by whatever else needs it.
If you must have your whole DB in RAM, you could use a ramdisk and tell Mongo to use it as its storage device.
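On Linux that can be as simple as a tmpfs mount. A minimal sketch, assuming the default Debian/Ubuntu data directory and an 8 GB mount (both are placeholders, and the data is lost on reboot unless you copy it back out):
# Create an in-memory filesystem large enough for the data plus indexes
sudo mkdir -p /mnt/mongo-ram
sudo mount -t tmpfs -o size=8g tmpfs /mnt/mongo-ram
# Copy the existing data files in, then point mongod at the ramdisk
sudo cp -r /var/lib/mongodb/. /mnt/mongo-ram/
mongod --storageEngine wiredTiger --dbpath /mnt/mongo-ram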
I would recommend that you revise your indices and/or data structures. Having the correct ones can make a huge difference in performance. We are talking about seconds vs hours.

Postgres running slow after indexing finished

My Postgres has been running really slow lately; an aggregation over a month of data usually ends up taking more than 1 minute (to be exact, the last one took 7 minutes and 23 seconds).
Last Friday I recreated the servers (master and replica) and reimported the database.
The first thing I noticed is that the database went from 133 GB to 42 GB (the actual data is around 12 GB; I guess the rest is indexes).
Everything was fast as hell for a day; after that the indexing finished (26 GB of indexes) and now I'm back to square one.
A count on ~5 million rows takes 3 minutes 42 seconds.
I made the autovacuum more aggressive and it looks like it's doing its job now, but the DB is still slow.
I am using the DB for an API, so it's constantly growing. At the moment I have 2 tables, one with around 5 million rows and the other with 28 million.
So while the master has a lot of activity and, let's say, I'm expecting some performance loss there, I don't expect it from the replica.
What's curious is that after a restart it's really fast for an hour or so.
Another thing I noticed is that on every query I run, IO is at 100% while memory and CPU are almost not used at all.
Any help would be greatly appreciated.
Update
Same database on a smaller machine works like a charm.
Same queries, same indexes.
The only difference is the traffic, not writing or updating that much.
I also forgot to mention one thing: one of my indexes is clustered.
The live machine has 5 cores, 64 GB of RAM, and 3k IOPS.
The test machine has 2 cores, 4 GB of RAM, and an SSD.
Update
Found my issue.
Apparently the autovacuum can't get a lock, and by the time it gets one the dead tuples have piled up.
I've made the autovacuum more aggressive for now and deleted a bunch of unused indexes.
Still don't know how to fix the lock issue, though.
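(For anyone curious what "more aggressive" means here: it comes down to the usual autovacuum knobs in postgresql.conf, along the lines below. The values are illustrative rather than the exact ones I used.)
# postgresql.conf -- illustrative values only
autovacuum_vacuum_scale_factor = 0.05   # vacuum after ~5% of a table changes (default 0.2)
autovacuum_analyze_scale_factor = 0.02  # re-analyze sooner than the 0.1 default
autovacuum_vacuum_cost_limit = 1000     # let each autovacuum run do more work per cycle
autovacuum_naptime = 15s                # check for work more often than the 1min default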
Update
Looks like something is increasing the estimated row count.
Since my last update the row count has increased by 2 million.
I guess that by tomorrow the row count will again be around 12 million and the count will be slow as hell again.
Could this be related to autovacuum?
Update
Well, found my issue.
It looks like Postgres loses a lot of speed on a write-intensive database.
I had a column that was used as a flag and updated many times per day.
Everything looks really good after the flag and its updates were removed.
Any clue on how to fix this issue on a write-intensive table?
May be the following pointers help:
Are you really sure you want to do a 5-million-row aggregation for an API, every time? Can't you split the data into chunks so that only a small number of chunks actually receive most of the new rows (and the aggregations of all the previous chunks can be reused for the next query)? Time is one such measure; serial numbers could be another. If so, partitioning the data is an obvious solution you should investigate; it has a good chance of giving you sub-second query times (assuming you store the aggregations for previous chunks smartly).
A hunch about that first-hour magic: although the data fits in RAM, concurrent querying pushes the data set out, and then it's purely disk I/O; in that case, the CPU and RAM being idle isn't a surprise.
Finally, I think this setup is asking for a redesign. There is only so much you can do with a single SQL query, and expecting sub-second query times for a 5-million-row data set that is not in RAM is probably too optimistic!
(Nonetheless, do post your findings, if possible)

Progress Database Performance issue?

We have recently upgraded to OE 11.3. The application and database appear to be slow in one particular location, but we haven't faced any performance issues with the application or the databases otherwise. I have checked a few parameters in promon, such as buffer hits, the number of database buffers, and the -spin parameter:
Buffer hits: 97%
Number of database buffers: 50000
-spin before timeout: 2000, which looks very low.
Is there any way we can find out why the database and application are slow only in that location?
We are not facing any performance issues from other locations.
Would increasing the -spin value improve performance in that location?
Location refers to geographical location.
You are not providing very much information:
A) About your intentions. Do you just want everything to be "faster"? Or are there other needs, like servers being out of memory or under heavy load, etc.?
B) About your system. How many users, databases, tables, indices, etc.?
C) When you say location - what do you really mean? Is it a specific program, a specific query/search or a specific (geographical) location?
Buffer hits
A 97% buffer hit rate doesn't say that much on its own:
Are there 1 000 record lookups or 1 000 000 000?
"Primary Buffer hits" says nothing about individual tables. Perhaps all the "buffer misses" come from a single table (or very few).
A simple explanation of buffer hits:
A record read from the buffer (memory) is a "hit"; a record read from disk is not.
1 000 record lookups with 97% buffer hits means:
970 records are read from buffer (memory). (0.97 x 1 000)
30 records are read from disk. (0.03 x 1 000)
Increasing to 99% buffer hits means you will remove:
20 disk reads. (0.02 x 1 000)
1 000 000 000 record lookups with 97% buffer hits means:
970 000 000 records are read from buffer (memory).
30 000 000 are read from disk.
Increasing to 99% buffer hits means you will remove:
20 000 000 disk reads.
In the first case you most likely won't notice anything at all when going from 97% to 99%. In the second case the load on the disks will decrease a lot.
Conclusion
Increasing -B might improve your performance, as well as your buffer hit rate. Changing -spin might also affect performance by utilizing more of your CPU. It all depends on how your system works; the best way, really, is to try it (with a test setup).
The first thing you really should do is look at your application and its most-run queries: do they utilize optimal indices? If not, you can tune a great deal without seeing big differences. Read up on index usage, XREF compiling, and the various VSTs you can use to check index performance, etcetera.
This is a good place to start:
Top 10 (really more) Performance Tuning Tips For The Progress Database
Also, you can try the excellent free ProTop software and get some guesstimates for -B:
ProTop
This question is very vague. You would be much better off asking it in a forum where some "back and forth" can take place and where you can be guided to a more complete answer.
You might try:
http://progresstalk.com
https://community.progress.com
http://peg.com
These forums all have dedicated DBA focused areas where many people routinely chip in to help.
We have found that adding -T /dev/shm (on Linux servers) made a big performance improvement:
/oe116> cat startup.pf
-T /dev/shm
You can also add this to your common.pf files.
You can see the before and after of this by running the following (with the database running):
lsof | grep delete
You should see a lot of temporary file locations on your hard disk; after you add the parameter and restart your database, they will be in shared memory instead.

MongoDB Insert performance - Huge table with a couple of Indexes

I am testing MongoDB for use as a database with a huge table of about 30 billion records of about 200 bytes each. I understand that sharding is needed for that kind of volume, so I am trying to get 1 to 2 billion records onto one machine. I have reached 1 billion records on a machine with 2 CPUs (6 cores each) and 64 GB of RAM. I ran mongoimport without indexes, and the speed was okay (an average of 14k records/s). I then added indexes, which took a very long time, but that is okay as it is a one-time thing. Now, however, inserting new records into the database takes a very long time. As far as I can tell, the machine is not loaded while inserting records (CPU, RAM, and I/O are all in good shape). How can I speed up inserting new records?
I would recommend adding this host to MMS (http://mms.10gen.com/help/overview.html#installation); make sure you install with munin-node support, which will give you the most information. This will allow you to track down what might be slowing you down. Sorry I can't be more specific in the answer, but there are many, many possible explanations here. Some general points:
Adding indexes means that the indexes, as well as your working data set, now need to be in RAM; this may have strained your resources (look for page faults).
Now that you have indexes, they must be updated when you insert. If everything fits in RAM this should be OK; see the first point.
You should also check your disk IO to see how it is performing. How does your background flush average look? (A quick way to watch this is sketched after these points.)
Are you running a suitable filesystem (XFS, ext4) and a kernel version later than 2.6.25? (Earlier versions have issues with fallocate().)
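Something along these lines is usually enough for that kind of checking (assuming mongostat and sysstat/iostat are installed on the host):
# Watch insert rate, page faults and flush activity (refresh every 5 seconds)
mongostat 5
# In another terminal: per-device utilization and wait times
iostat -x 5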
Some good general information for follow up can be found here:
http://www.mongodb.org/display/DOCS/Production+Notes

Configuration Recommendations for a PostgreSQL Installation

I have a Windows Server 2003 machine which I will be using as a Postgres database server. The machine is a dual-core 3.0 GHz Xeon with 4 GB of ECC memory and 4 x 120 GB 10K RPM SAS drives, all striped.
I have read that the default Postgres install is configured to run nicely on a 486 with 32MB RAM, and I have read several web pages about configuration optimizations - but was hoping for something more concrete from my Stackoverflow peeps.
Generally, it's only going to serve one database (potentially one or two more), but the catch is that the database has one table in particular which is massive (hundreds of millions of records with only a few columns). Presently, with the default configuration, it's not slow, but I think it could potentially be even faster.
Can people please give me some guidance and recommendations on configuration settings you would use for a server such as this?
A 4-drive stripe was a bad idea: if any of these drives fails you'll lose all your data, and even SAS drives sometimes fail; with 4 drives it is 4 times more likely than with 1 drive. You should go for RAID 1+0.
Use the latest version of Postgres (8.3.7 at the moment); there are many performance improvements in every major version.
Set shared_buffers parameter in postgresql.conf to about 1/4 of your memory.
Set effective_cache_size to about 1/2 of your memory.
Set checkpoint_segments to about 32 (checkpoint every 512MB) and checkpoint_completion_target to about 0.8.
Set default_statistics_target to about 100.
Migrate to Enterprise Linux or FreeBSD: Postgres works much better on Unix-type systems; Windows support is a recent addition and not very mature.
You can read more on this page: Tuning Your PostgreSQL Server — PostgreSQL Wiki
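Pulled together for a 4 GB machine, the settings above would look roughly like this in postgresql.conf (rough starting points for 8.3, not exact prescriptions):
# postgresql.conf -- rough starting points for a 4 GB machine on 8.3
shared_buffers = 1GB                    # ~1/4 of RAM
effective_cache_size = 2GB              # ~1/2 of RAM; a planner hint, allocates nothing
checkpoint_segments = 32                # checkpoint roughly every 512MB of WAL
checkpoint_completion_target = 0.8      # spread checkpoint I/O out
default_statistics_target = 100         # better planner estimates for the big table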
My experience suggests that (within limits) the hardware is typically the least important factor in database performance.
Assuming you have enough memory to keep commonly used data in cache, CPU speed may only make a 10-50% difference between a top-of-the-line machine and a common-or-garden box.
However, a missing index on an important search, or a poorly written recursive trigger, could easily make a difference of 1,000% or 10,000% or more in your response times.
Without knowing your exact table structure and row counts, I think anybody would agree that your hardware looks amply sufficient. It is only your database structure which will kill you. :)
UPDATE:
Without knowing the specific queries and your index details, there's not much more we can do. And in general, even knowing the queries, it's often very difficult to optimize without actually installing and running the queries with realistic data sets.
Given the cost of the server, and the cost of your time, I think you need to invest thirty bucks in a book. Then install your database with test data, run the queries, and see what runs well and what runs badly. Fix, rinse, and repeat.
Both of these books are specific to SQL Server and both have high ratings:
http://www.amazon.com/Inside-Microsoft%C2%AE-SQL-Server-2005/dp/0735621969/ref=sr_1_1
http://www.amazon.com/Server-Performance-Tuning-Distilled-Second/dp/B001GAQ53E/ref=sr_1_5