I am testing a G Suite Add-on and, for this purpose, I need to know the maximum character limit of a property value.
As the documentation here says, the quota limit for property value size is 9 KB. Since the size of a character varies across languages, what character size should I use to calculate the character limit here?
You can estimate that 1 KB is around 1,024 characters (assuming one byte per character).
This means that 9 KB corresponds to around 9,216 characters.
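If you want to account for multi-byte characters, here is a minimal Python sketch (assuming the quota is counted in bytes and values are stored as UTF-8; both are assumptions, not something the quota documentation spells out):

# Rough estimate of the character limit for a 9 KB property value,
# assuming the quota is measured in bytes and the text is UTF-8.
QUOTA_BYTES = 9 * 1024

samples = {
    "ASCII 'a'": "a",      # 1 byte in UTF-8
    "Latin 'é'": "é",      # 2 bytes
    "CJK '語'": "語",       # 3 bytes
    "Emoji '😀'": "😀",     # 4 bytes
}

for label, ch in samples.items():
    bytes_per_char = len(ch.encode("utf-8"))
    print(f"{label}: {bytes_per_char} byte(s)/char -> "
          f"about {QUOTA_BYTES // bytes_per_char} characters in 9 KB")

So the ~9,216-character figure only holds for single-byte (ASCII) text; for mostly CJK text the limit drops to roughly 3,000 characters.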
Suppose we have a CPU with a cache that consists of 128 blocks, where 8 bytes of memory can be stored in each block. How can I find which block each address belongs to? Also, what is each address's tag?
The following is my way of thinking.
Take the 32-bit address 1030, for example. If I do 1030 * 4 = 4120, I have the address in byte format. Then I turn it into an 8-byte format: 4120 / 8 = 515.
Then I do 515 % 128 = 3, which is (8-byte address) % (number of blocks), to find the block that this address maps to (block no. 3).
Then I do 515 / 128 = 4 to find the position of the address within block no. 3, so tag = 4.
Is my way of thinking correct?
Any comment is welcome!
What we know generically:
A cache decomposes addresses into fields, namely: a tag field, an index field, and a block offset field. For any given cache the field sizes are fixed, and knowing their widths (numbers of bits) allows us to decompose an address the same way the cache does.
An address as a simple number:
+---------------------------+
| address |
+---------------------------+
We would view addresses as unsigned integers, and the number of bits used for the address is the address space size. As decomposed into fields by the cache:
+----------------------------+
| tag | index | offset |
+----------------------------+
Each field uses an integer number of bits for its width.
What we know from your problem statement:
the block size is 8 bytes, therefore
the block offset field width is log2( block size in bytes )
the address space (total number of bits in an address) is 32 bits, therefore
tag width + index width + offset width = 32
Since no information about associativity is given, we should assume the cache is direct mapped: nothing suggests otherwise, and direct-mapped caches are common early in coursework. I'd verify this, or else state the direct-mapped assumption explicitly.
there are 128 blocks, therefore, for a direct mapped cache
there are 128 index positions in the cache array.
(for a 2-way or 4-way set-associative cache we would divide by 2 or 4, respectively)
Given 128 index positions in the cache array
the index field width is log2( number of index positions )
Knowing the index field width, the block offset field width, and total address width, we can compute the tag field width
tag field width = 32 - index field width - block offset field width
Only when you have such field widths does it make sense to attempt to decode a given address and extract the fields' actual values for that address.
Because there are three fields, the preferred approach to extraction is to simply write out the address in binary and group the bits according to the fields and their widths.
(Division and modulus can be made to work, but with (a) three fields and (b) the index field in the middle, the math is arguably more complex: to get the index we have to divide (to remove the block offset) and then take a modulus (to remove the tag bits). Still, this is equivalent to the binary-grouping approach.)
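As a concrete illustration of the width arithmetic above, here is a small Python sketch for the parameters in this question (128 blocks of 8 bytes, 32-bit addresses, direct mapped by assumption):

import math

BLOCK_SIZE = 8     # bytes per block
NUM_BLOCKS = 128   # direct mapped, so one index position per block (assumption)
ADDR_BITS = 32     # address space size

offset_bits = int(math.log2(BLOCK_SIZE))             # 3
index_bits  = int(math.log2(NUM_BLOCKS))             # 7
tag_bits    = ADDR_BITS - index_bits - offset_bits   # 22

print(offset_bits, index_bits, tag_bits)             # 3 7 22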
Comments on your reasoning:
You need to know whether 1030 is in decimal or hex. It is unusual to write addresses in decimal notation, since hex notation converts to binary (and hence to the various bit fields) much more easily. (Some educational computers use decimal notation for addresses, but they generally have a much smaller address space, like 3 decimal digits, and certainly not a 32-bit address space.)
Take the 32-bit address 1030, for example. If I do 1030 * 4 = 4120, I have the address in byte format.
Unless something is really out of the ordinary, the address 1030 is already in byte format — so don't do that.
Then I turn it into an 8-byte format: 4120 / 8 = 515.
The 8-byte block size corresponds to the block offset field when decoding an address. You need to decode the address into three fields, not simply divide it.
Again, the key is to first compute the block offset width, then the index width, then the tag width. Take a given address, convert it to binary, and group the bits to obtain the tag, index, and block offset values in binary (then, if you like, convert those values to hex, or to decimal if you must).
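For example, decoding the address 1030 from the question (taken as a decimal byte address, and using the 3/7/22-bit split derived above) could look like the following sketch; the shifts and masks are just the programmatic form of writing the address in binary and grouping the bits:

ADDR = 1030                  # assumed to be a decimal byte address
OFFSET_BITS, INDEX_BITS = 3, 7

offset = ADDR & ((1 << OFFSET_BITS) - 1)                  # low 3 bits
index  = (ADDR >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)  # next 7 bits
tag    = ADDR >> (OFFSET_BITS + INDEX_BITS)               # remaining 22 bits

print(f"{ADDR:#034b}")       # 0b00000000000000000000010000000110
print(tag, index, offset)    # 1 0 6

Under these assumptions, 1030 maps to index (block) 0 with tag 1 and byte offset 6, rather than block 3 with tag 4.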
I cannot find the maximum size of the symbol data type in KDB+.
Does anyone know what it is?
If you are talking about the physical length of a symbol: symbols exist as interned strings in kdb, so the maximum string length limit would apply. As strings are just lists of characters in kdb, the maximum size of a string would be the maximum length of a list. In 3.x this is 2^64 - 1; in previous versions of kdb this limit was 2,000,000,000.
However, there is a 2 TB maximum serialized size limit that would likely kick in first. You can roughly work out the size of a sym by serializing it:
q)count -8!`
10
q)count -8!`a
11
q)count -8!`abc
13
So each character adds a single byte, which would give a length limit on the order of 10^12 characters.
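To make the arithmetic behind that estimate explicit (a rough sketch in Python, using the 10-byte serialization overhead visible in the q session above):

TWO_TB   = 2 * 2**40   # 2 TB serialized-size cap, in bytes
OVERHEAD = 10          # bytes of overhead seen when serializing the empty symbol
print(f"{TWO_TB - OVERHEAD:.2e}")   # ~2.20e+12 characters at one byte each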
If you mean the maximum number of symbols that can exist in memory, then the limit is 1.4B.
Most posts I read just give info about the maximum file name lengths. But I want to understand why there's this limit. Why can't file names be big? I see that a few file systems have put a limit of 255 bytes. Why not 1 MB, or anything more than 255 bytes? I probably would never have a file name longer than 100 characters. But this question is about why the limit exists.
A long file name costs much more space and time than you might imagine.
The 255-byte limit on file name length is a long-standing trade-off between human convenience and space/time efficiency, and backward compatibility, of course.
Back in the old days, hard drive capacity was counted in MB or a few GB.
File names were often stored in fixed-length C structs, and the size of the struct was usually rounded to a multiple of 512 bytes, the size of a physical sector, so that it could be read with a single access of the disk head.
If the file system put a limit of 1 MB on file names, it would run out of hard disk space with only a few hundred files, and memory limits also apply.
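To put rough numbers on the space argument, here is a small Python sketch (the per-entry metadata size is hypothetical, just to illustrate; real directory-entry formats differ between file systems):

import math

SECTOR   = 512   # bytes per physical sector
METADATA = 32    # hypothetical fixed metadata per directory entry

def sectors_per_entry(name_limit):
    """Sectors needed to hold one fixed-length directory entry."""
    return math.ceil((name_limit + METADATA) / SECTOR)

print(sectors_per_entry(255))          # 1: the whole entry fits in a single sector read
print(sectors_per_entry(1024 * 1024))  # 2049: about 1 MB of disk per file name

At roughly 1 MB per name, a few hundred files would already consume hundreds of MB just for names, a large fraction of (or more than) an old MB/GB-era hard drive.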
WinDbg has a range limit applied to the d-command series. According to the documentation, the limit is 256 MB. This limit can be bypassed using the L? syntax.
L? Size (with a question mark) means the same as L Size, except that L? Size removes the debugger's automatic range limit. Typically, there is a range limit of 256 MB, because larger ranges are typographic errors. If you want to specify a range that is larger than 256 MB, you must use the L? Size syntax.
However, I tried to do a
du 3ddabac0+8 L 0n6518040
which is only 6.5 MB and it says
Range error in 'du 3ddabac0+8 l 0n6518040'.
The real limit in WinDbg 6.3 is 512kB. Starting from 0x80001 or 0n524289 you need to use L? to bypass the limit.
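Applying the L? syntax from the documentation quote, the command above would then become (untested beyond what the quote states):
du 3ddabac0+8 L? 0n6518040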
I'm writing a MATLAB program that will generate a matrix with 1 million rows and an unknown number of columns (at most 1 million).
I tried pre-allocating this matrix:
a=zeros(1000000,1000000)
but I received the error:
"Maximum variable size allowed by the program is exceeded."
I have a feeling that not pre-allocating this matrix will seriously slow the code down.
This made me curious: What is the maximum array size in MATLAB?
Update: I'm going to look into sparse matrices, because the result I am aiming for in this particular problem will be a matrix consisting mostly of zeros.
Take a look at this page; it lists the maximum sizes: Max sizes
It looks to be on the order of a few hundred million. Note that the matrix you're trying to create here has 10^6 * 10^6 = 10^12 elements. This is many orders of magnitude greater than the max sizes provided, and you also do not have that much RAM on your system.
My suggestion is to look into a different algorithm for what you are trying to accomplish.
To find out the real maximum array size (Windows only), use the command user = memory. The field user.MaxPossibleArrayBytes shows how many bytes of contiguous memory are free. Divide that by the number of bytes per element of your array (8 for doubles) and you know the maximum number of elements you can preallocate.
Note that, as woodchips said, MATLAB may have to copy your array (if you pass it by value to a subfunction, for example). In my experience, about 75% of the max possible array is usually available multiple times.
The Limits
There are two different limits to be aware of:
Maximum array size (in terms of number of elements) allowed by MATLAB, regardless of current memory availability.
Current bytes available for a single array -- the (current) maximum possible array size in bytes.
The first limit is what causes "Maximum variable size allowed by the program is exceeded", not the second limit. However the second one is also a practical limit of which you must be aware!
Checking the Limits
The maximum number of elements allowed for an array is checked as follows:
>> [~,maxsize] = computer
maxsize =
2.8147e+14
According to the documentation for the computer command, this returns:
maximum number of elements allowed in a matrix on this version of MATLAB
This is a static MATLAB limit on number of elements, not affected by the state of the computer (hardware specs and current memory usage). And at over 2 petabytes for a double array of that length, it's also way higher than any computer of which I am aware!
On the other hand, the largest practical array size that you can create at any given moment can be checked by the memory command:
>> memory
Maximum possible array: 35237 MB (3.695e+10 bytes) *
Memory available for all arrays: 35237 MB (3.695e+10 bytes) *
Memory used by MATLAB: 9545 MB (1.001e+10 bytes)
Physical Memory (RAM): 24574 MB (2.577e+10 bytes)
* Limited by System Memory (physical + swap file) available.
As the message says, these values are based on actual current memory availability, taking into account both physical memory and the swap file (collectively, virtual memory).
If needed, these values can be accessed programmatically via m = memory;.
Adjusting the Limits
The first limit (the hard limit) was fixed up until R2015a; since then it can be changed (but only reduced, to a fraction of system memory) through the array size limit setting in MATLAB's preferences. You can't increase it beyond your system limits.
The second limit obviously has no "setting" in MATLAB since it's based on available memory and computer configuration. Aside from adding RAM, there's not a lot you can do: (1) pack to consolidate workspace memory and perform "garbage collection", but this may only help on certain platforms, and (2) increasing page file size to allow other stuff to swap out and give MATLAB more physical memory. But be cautious when relying on your page file as your computer may become unresponsive if page file thrashing happens.
In older versions of Matlab that don't include the memory command, you can use:
feature memstats
Physical Memory (RAM):
In Use: 738 MB (2e2c3000)
Free: 273 MB (11102000)
Total: 1011 MB (3f3c5000)
Page File (Swap space):
In Use: 1321 MB (529a4000)
Free: 1105 MB (45169000)
Total: 2427 MB (97b0d000)
Virtual Memory (Address Space):
In Use: 887 MB (37723000)
Free: 1160 MB (488bd000)
Total: 2047 MB (7ffe0000)
Largest Contiguous Free Blocks:
1. [at 4986b000] 197 MB ( c585000)
2. [at 3e1b9000] 178 MB ( b2a7000)
3. [at 1f5a0000] 104 MB ( 6800000)
4. [at 56032000] 77 MB ( 4d3e000)
5. [at 68b40000] 70 MB ( 4660000)
6. [at 3a320000] 54 MB ( 3610000)
7. [at 63568000] 45 MB ( 2d48000)
8. [at 35aff000] 40 MB ( 2821000)
9. [at 60f86000] 37 MB ( 25ca000)
10. [at 6f49d000] 37 MB ( 25b3000)
======= ==========
842 MB (34ac0000)
ans =
207114240
You can't suppress the output, but it returns the largest memory block available (207,114,240 bytes / 8 = 25,889,280 doubles).