I am considering using large numbers of gensyms to differentiate between objects in a system I'm building (like refs in Erlang).
Should I expect to run into system limits after creating large numbers of gensyms?
For reference, I'm using SBCL.
Different implementations use different amounts of memory. From simply testing the number of bytes consed by gensym, the cost depends on the argument you pass it and how much it differs from previous calls.
If you have a macro that always passes a fixed set of strings to gensym, it will use roughly 0.5-1.5 kB per call; for every consecutive call with the same argument it drops to 65-150 bytes or so.
I had it making 65-byte gensyms for a while and stopped it well above 4 billion, but I don't know whether that qualifies, since "large" is ambiguous.
I am reading through a tutorial on Python and it says "There is a variant xrange() which avoids the cost of building the whole list for performance sensitive cases (in Python 3000, range() will have the good performance behavior and you can forget about xrange())." Source.
I am asking about Python 2.x and not Python 3.
I'm not sure what this means. Am I to understand that range(a, b) creates a list of all the values from a to b and then iterates over them, while xrange(a, b) creates each value only as it iterates? If that is the case, then performance is only improved if the code does not actually iterate over the entire list and breaks out early.
Can someone comment on this?
You understood correctly.
The difference resides in memory: when using range(), the entire list is allocated in memory, whereas xrange() returns a generator (strictly speaking, an xrange object, which acts like a generator).
The Python documentation for xrange() is quite clear on this subject:
The advantage of xrange() over range() is minimal (since xrange() still has to create the values when asked for them) except when a very large range is used on a memory-starved machine or when all of the range’s elements are never used
In Python 3, range() doesn't generate the whole sequence at once, so it is just as efficient:
The advantage of the range type over a regular list or tuple is that a range object will always take the same (small) amount of memory, no matter the size of the range it represents (as it only stores the start, stop and step values, calculating individual items and subranges as needed).
Documentation
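To make the difference concrete, here is a minimal sketch (Python 2.x assumed, since that's what the question is about):

import sys

# range() materializes the whole list up front; xrange() stores only
# start/stop/step and produces each value on demand.
print sys.getsizeof(range(10 ** 6))   # several megabytes for the list object
print sys.getsizeof(xrange(10 ** 6))  # a few dozen bytes, regardless of length

# The values still get created during iteration either way; xrange() just
# never pays for values you skip, e.g. when breaking out early:
for i in xrange(10 ** 9):
    if i == 5:
        break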
What is a good Amazon Redshift column encoding for a VARCHAR column where each row contains a short (usually 50-100 characters) value that contains little repetition, but for which there is a high degree of similarity across the rows? (Identical prefixes, in particular.)
The maddeningly terse LZO description makes it sound like LZO is applied individually to each value. In that case, there will be no shared dictionary across the rows and little commonality to exploit. OTOH, if the LZO is applied to an entire 1 MB block of values written to disk, it would perform well.
Byte Dictionary sounds like it only yields savings when the values are identical rather than similar, so not a good option.
Compression is applied per block, which means that LZO is almost always the right choice for VARCHAR. Most of the other alternatives require the values either to be completely identical to other values (e.g. BYTEDICT, RUNLENGTH) or to be numeric (e.g. DELTA, MOSTLY8).
The only other alternative for VARCHAR columns is TEXT255/TEXT32K, which might work for your use case. These build a dictionary of the first N words (245 for TEXT255, variable for TEXT32K) and replace occurrences of those words with a one-byte index. If your values share a lot of words, TEXT255 might work better than LZO.
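Since compression is applied per block rather than per value, similar-but-not-identical rows do compress well together. A rough illustration of why (Python sketch; zlib stands in for LZO, and the values are made up):

import zlib

# Hypothetical rows: short strings sharing a long common prefix.
values = ["com.example.api.v2.handler.%05d" % i for i in range(10000)]

one_at_a_time = sum(len(zlib.compress(v.encode())) for v in values)
whole_block = len(zlib.compress("\n".join(values).encode()))

print(one_at_a_time, whole_block)  # the single block is dramatically smaller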
One of the objectives of a DHT is to partition the keyspace, so that each node (or group of nodes) owns a share of it. To do so, it hashes the filename of a file that is to be stored, and stores the file at the node responsible for that part of the keyspace. But why does it have to hash the filename? Couldn't it just work like a dictionary, so that instead of a node holding hash values between 0000 and 0a2d, it would hold filename values between C and E?
But, why does it have to hash the filename?
It doesn't have to be a filename. It can hash other things too. E.g. file contents. Or metadata. Or cryptographic keys used as identities of users in the network.
Couldn't it just work like a dictionary, so instead of having a node hold hash values between 0000 and 0a2d, it would hold filename values between C and E?
Because filenames are not uniformly distributed throughout the possible keyspace (how often do you see filenames starting with some exotic Unicode character?), and their entropy is spread over a variable length, leading to even more clustering at the top level.
If you were to index all existing unix filesystems in the world you would have massive clustering around the /etc/... prefix for example.
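The clustering is easy to demonstrate (Python sketch with hypothetical filenames):

import hashlib
from collections import Counter

# Realistic paths cluster heavily by prefix; their hashes do not.
names = (["/etc/conf.d/svc%03d" % i for i in range(500)] +
         ["/home/user/doc%03d.txt" % i for i in range(500)])

by_name = Counter(n[0] for n in names)                          # first character
by_hash = Counter(hashlib.sha1(n.encode()).digest()[0] % 16 for n in names)

print(by_name)  # one giant bucket: every path starts with "/"
print(by_hash)  # 16 buckets of roughly equal size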
There are other p2p overlay networks that can deal with heavy clustering in the keyspace, often by rearranging nodes around the hotspots to increase network capacity in the affected regions of the keyspace (e.g. based on Levenshtein distance), but they generally aren't distributed hash tables, because they do not employ hashing.
Because searches are done on numbers.
When you hash a file, you end up with a number, and that number will be stored on the K peers whose ids are nearest to it (in their K-buckets).
Names are irrelevant: you're performing XOR searches on a numeric space, so each hop lets you discard half of the remaining space.
Once you find a peer that holds the bucket the hash points to, you can communicate with that peer and exchange the related information.
A DHT such as libtorrent's Kademlia implementation is best seen as a distributed routing data structure. The problem you're solving is: how do I find one number among billions, one peer among millions, in the fewest possible hops? The answer is that every node on the network follows a set of simple rules about how to organize the numbers it stores and the peers it knows about.
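A minimal sketch of that XOR metric (Python, with toy 8-bit ids instead of real 160-bit ones):

# Kademlia-style lookup step: move to the known peer whose id is XOR-closest
# to the target; each hop extends the shared prefix, roughly halving the
# remaining search space.
target = 0b10110110
peers = [0b10110001, 0b10101111, 0b01110000]

closest = min(peers, key=lambda p: p ^ target)
print(format(closest, "08b"))  # the peer sharing the longest prefix wins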
I recommend you read these notes on how a real DHT actually works.
https://gist.github.com/gubatron/cd9cfa66839e18e49846
Also, storing a number takes a lot less space than storing a word.
If you know the word, you can hash the word and search for the hash.
Yes, it could work like a dictionary. However, it would be missing some desirable (for the typical DHT use case) emergent properties that come from using a hash.
One property that hashing (along with the XOR distance metric) gives you is an even distribution of content amongst the nodes participating in a DHT. "Even" here comes with caveats from how the k-bucket data structure works (see these k-bucket slides for an overview), but in aggregate, nodes end up distributing data evenly amongst the DHT peers. In theory, anyway; in practice you can get hotspots.
Another property of using a hash matters when you're looking for a file with specific contents. If you use hashes of the file contents as the identifiers, you can be statistically sure (the guarantee comes from the collision properties of your hash function) that you're getting the contents you asked for. Relying on a filename introduces a level of indirection that can serve different contents for the same name. Depending on your use case, that may or may not be acceptable.
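A small sketch of that content-addressing property (Python, hypothetical payload):

import hashlib

# The key *is* the hash of the bytes, so whoever fetches the value can
# verify it locally, without trusting the storing node or any filename.
payload = b"some file contents"
key = hashlib.sha1(payload).hexdigest()

def verify(key, received):
    return hashlib.sha1(received).hexdigest() == key

assert verify(key, payload)          # the genuine contents check out
assert not verify(key, b"tampered")  # anything else is rejected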
I've considered what you're proposing before, as a prefix to a SHA-1 hash. So, something like node1-cd9cf... (the prefix could be anything, really; it doesn't need to be human-readable). This would ensure that everything with that prefix ends up pretty much on a node that identifies itself with an id starting with "node1-". But you'd need a DHT implementation (including the k-bucket implementation) that supports variable-length ids. In this case you're guaranteeing a hotspot: it's equivalent to artificially ensuring that things are "close together", in that the XOR distance between them is very small. Why would anyone want to do this? For example, com.example.www-cd9cf... combined with some crypto could ensure that, while you're participating in a DHT, the data is stored on your servers. I haven't seen this implemented, though.
I'm creating an Ada program for Windows that needs to be able to pass strings to some functions written in C. Until now I have been manipulating the strings in Ada using the Unbounded_String type, and then converting the data to an Interfaces.C.char_array before passing it to the C functions.
This works fine, but performance is a bit of an issue on slower, older computers. The C function is sometimes called repeatedly on a slightly modified version of a string, which requires the Unbounded_String to be converted to a similar char_array every time. The strings aren't modified by the C functions, so they only ever have to be converted to char_array.
I have thought of storing the strings in char_array, and converting from an Ada type each time the string is manipulated. The data is passed to C more often than it is changed, so it would improve performance. The problem with this approach is that often the length of the string will change, sometimes by a lot, and there is no way of knowing the maximum length beforehand.
The ideal solution would be something similar to an Unbounded_String, but storing the string as a char_array: something dynamically sized, allocating a new array when the old one isn't big enough, that allows Ada Characters/Strings to be inserted into (and removed from) the array, converting only those characters to C chars.
Is there any (relatively) easy, fast way of doing this without having to implement it myself? Or is there any other quick way of manipulating C-compatible strings in Ada? Thanks in advance for any suggestions.
You don't mention how many objects you expect to have of your type, but I will assume that we are not talking about so many that you will be anywhere near exhausting your available address space.
Just encapsulate a sufficiently large char_array (say 10 times the largest expected size) in a private record, and create the needed operations to manipulate it.
If you're very unlucky, you may need to tell your compiler/run-time environment that you need an unusually large stack, but save that worry for when you actually experience it.
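The shape of that design, sketched in Python for brevity (in Ada the backing store would be an Interfaces.C.char_array inside a private record; all names here are made up):

# Fixed-capacity backing buffer plus a logical length; every edit keeps the
# text in the C-compatible form, so handing it to C costs nothing extra.
# Overflow handling is omitted: capacity is assumed "sufficiently large".
class CBuffer(object):
    def __init__(self, capacity):
        self._data = bytearray(capacity)  # the oversized char_array
        self._len = 0                     # current logical length

    def insert(self, pos, text):
        raw = text.encode("ascii")        # convert only the edited characters
        # shift the tail right, then drop the new bytes into the gap
        self._data[pos + len(raw):self._len + len(raw)] = self._data[pos:self._len]
        self._data[pos:pos + len(raw)] = raw
        self._len += len(raw)

    def delete(self, pos, count):
        # shift the tail left over the removed span
        self._data[pos:self._len - count] = self._data[pos + count:self._len]
        self._len -= count

    def as_c_string(self):
        return bytes(self._data[:self._len]) + b"\x00"  # NUL-terminated copy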
Just as a load test, I was playing with different data structures in Scala, wondering what it takes to work with, or even create, a one-billion-element array. 100 million seems to be no problem; of course there's no real magic about the number 1,000,000,000. I'm just seeing how far you can push it.
I had to bump up memory on most of the tests:
export JAVA_OPTS="-Xms4g -Xmx8g"
// insanity begins ...
val buf = (0 to 1000000000 - 1).par.map { i => i }.toList
// java.lang.OutOfMemoryError: GC overhead limit exceeded
However, preallocating an Array[Int] works pretty well. It takes about 9 seconds to iterate and build the object. Interestingly, doing almost anything with ListBuffer seems to automatically take advantage of all cores. However, the code above will not finish (at least with an 8 GB -Xmx).
I understand that this is not a common case and I'm just messing around. But if you had to pull some massive thing into memory, is there a more efficient technique? Is an Array of a primitive type as efficient as it gets?
The per-element overhead of a List is considerable. Each element is held in a cons cell (case class ::) which means there is one object with two fields for every element. On a 32-bit JVM that's 16 bytes per element (not counting the element value itself). On a 64-bit JVM it's going to be even higher.
List is not a good container type for extremely large contents. Its primary feature is very efficient head / tail decomposition. If that's something you need then you may just have to deal with the memory cost. If it's not, try to choose a more efficient representation.
For what it's worth, I consider memory overhead considerations to be one thing that justifies using Array. There are lots of caveats around using arrays, so be careful if you go that way.
Given that the JVM can lay an Array of Ints out sensibly in memory, if you really need to iterate over them it would indeed be the most efficient approach. It would generate much the same code as doing exactly the same thing in Java.
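The boxed-versus-flat trade-off isn't JVM-specific; here is a rough Python analogy to List versus Array[Int]:

import sys
from array import array

n = 10 ** 6
boxed = list(range(n))       # one pointer per slot, plus an object per int
flat = array("i", range(n))  # contiguous 4-byte machine ints

print(sys.getsizeof(boxed))  # ~8 MB of pointers alone (64-bit CPython)
print(sys.getsizeof(flat))   # ~4 MB total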