How to reduce the size of a Java Card applet

I wrote an applet that is 19 KB on disk. It has three classes: the first one extends Applet, the second one has static functions, and the third one is a class that I instantiate in my applet.
I have three questions:
Is there any way to find out how much space is taken by my applet instance on my Java Card?
Is there any tool to reduce the size of a Java Card applet (.cap file)?
Can you explain rules that help me reduce my applet size?

Is there any way to find out how much space is taken by my applet instance on my Java Card?
(AFAIK) There is no official way to do that (in GlobalPlatform / Java Card).
You can estimate the real memory usage from the difference in free memory before applet loading and after installation (and most likely after personalization, as you will probably create some objects during personalization). Some ways to obtain free memory information are:
Using JCSystem.getAvailableMemory(), which gives information for all memory types (if implemented).
Using the Extended Card Resources Information tag, retrievable with GET DATA (see ETSI TS 102 226), if implemented.
Using a proprietary command (ask your vendor).
You can have a look inside your .cap file and see the sizes of the parts that are loaded into the card -- but this is surely VERY INACCURATE, as the card OS is free to deal with the content at its own discretion.
I remember the JCOP Tools have a special Eclipse view which shows various statistics for the current applet -- that might be informative as well.
The Reference Implementation offers an option to get some resource consumption statistics -- might be useful as well (I have never used this, though).
Is there any tool to reduce the size of a Java Card applet (.cap file)?
I used ProGuard in the past to improve applet performance (which in fact increased applet size, as I used it mostly for method inlining) -- but it should work for reducing applet size as well (e.g. eliminating dead code -- see the shrinking options). There are many other optimizations, too -- have a look, but do not expect miracles.
Can you explain rules that help me reduce my applet size?
I would emphasize good design and proper code re-use. There are definitely many resources on generic optimization techniques, but I don't know of any Java Card specific ones, so I can't help here :(
If you have more applets loaded into a single card you might place common code into a shared library.
Some additional (random) notes:
It might be more practical to just get a card with a larger memory.
Free memory information given by the card might be inaccurate.
I am surprised you have problems with your applet size, as usually the problems are with transient memory size instead (AFAIK).
Your applet might be simply leaking memory and thus use more and more memory.
Do not sacrifice security for lesser applet size!
Good luck!

To answer your 3rd question:
3. Can you explain rules that help me reduce my applet size?
Some basic guidelines are:
Keep the number of methods to a minimum. As you know, we have very limited resources on smart cards, and method calls are an overhead, so with fewer method calls the performance of the card will increase. Avoid using get/set methods. Recursive calls should also be avoided, as the stack size on most cards is around 200 bytes.
Avoid using more than 3 parameters for virtual methods and 4 for static methods. This way, the compiler will use a number of bytecode shortcuts, which reduces code size.
To work on temporary data, use the APDU buffer or transient arrays, as writing to EEPROM is about 1,000 times slower than writing to RAM.
Inheritance is also an overhead in Java Card, particularly when the hierarchy is complex.
Accessing array elements is also an overhead on the card. So, in situations where an array element is accessed repeatedly, store it in a local variable and use that.
Instead of doing this:
if (arr[index1] == 1) { /* do this */ }
if (arr[index1] == 2) { /* do that */ }
if (arr[index1] == 3) { /* do something else */ }
Do this:
byte temp = arr[index1];
if (temp == 1) { /* do this */ }
if (temp == 2) { /* do that */ }
if (temp == 3) { /* do something else */ }
Replace nested if-else statements with equivalent switch statements, as a switch executes faster and takes less memory.

Related

Java Card applet size

How much of eeprom does JC applet installation use?
Is it the size of the CAP file, or how do I find out?
Maybe I could use the JCSystem.getAvailableMemory() method, but is there some direct method, like a command in the Java Card development tools from Oracle?
(I don’t have JCOP)
I haven't found a direct method to get the size of an applet yet. Here are two ways that I used:
Standard Java Card
In a development environment, create an applet that calls JCSystem.getAvailableMemory(JCSystem.MEMORY_TYPE_PERSISTENT), and send an APDU to invoke it before and after the load to get the EEPROM used by the classes, then again before and after the install to get the EEPROM used by the applet instance.
The limitation is that this method returns a short, which means the maximum reportable free memory is 0x7FFF (32767) bytes. You cannot use this method if the free memory exceeds 32767 bytes, so you first need to eat up free memory with a dummy applet until the free memory before the load is below 32767 bytes.
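For illustration, here is a rough host-side sketch in C using the PC/SC API (winscard / pcsclite). The measuring applet, its CLA/INS and its response format are hypothetical stand-ins for whatever your own applet exposes, and error handling is kept minimal:
#include <stdio.h>
#include <winscard.h>

/* Hypothetical APDU asking our measuring applet to return
 * JCSystem.getAvailableMemory(JCSystem.MEMORY_TYPE_PERSISTENT)
 * as a two-byte big-endian short. */
static const BYTE GET_FREE_EEPROM[] = { 0x80, 0x10, 0x00, 0x00, 0x02 };

static long query_free(SCARDHANDLE card)
{
    BYTE resp[258];
    DWORD rlen = sizeof resp;
    if (SCardTransmit(card, SCARD_PCI_T1, GET_FREE_EEPROM,
                      sizeof GET_FREE_EEPROM, NULL, resp, &rlen) != SCARD_S_SUCCESS
            || rlen < 4)
        return -1;
    return ((long)resp[0] << 8) | resp[1];    /* value precedes SW1 SW2 */
}

int main(void)
{
    SCARDCONTEXT ctx;
    SCARDHANDLE card;
    DWORD proto;
    char readers[1024];
    DWORD sz = sizeof readers;

    if (SCardEstablishContext(SCARD_SCOPE_SYSTEM, NULL, NULL, &ctx) != SCARD_S_SUCCESS
            || SCardListReaders(ctx, NULL, readers, &sz) != SCARD_S_SUCCESS
            || SCardConnect(ctx, readers, SCARD_SHARE_SHARED,
                            SCARD_PROTOCOL_T0 | SCARD_PROTOCOL_T1,
                            &card, &proto) != SCARD_S_SUCCESS)
        return 1;

    /* SELECT the measuring applet by its AID first (omitted here). */
    printf("free EEPROM: %ld bytes\n", query_free(card));
    /* Load (or install) the applet under test with your GlobalPlatform
     * tool of choice, run this again, and take the difference. */
    return 0;
}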
Java Card with GP 2.2 + ETSI support
If your card supports GlobalPlatform 2.2 and ETSI TS 102 226, you can use the GET DATA command.
GP card spec 2.2 section 11.3 states that
Tag ‘FF21’: Extended Card Resources Information available for Card Content Management, as defined in ETSI TS 102 226.
And in ETSI TS 102 226, section 8.2.1.7.2:
After the successful execution of the command, the GET DATA response data field shall be coded as defined in GlobalPlatform [4]. The value of the TLV coded data object referred to in reference control parameters P1 and P2 of the command message is:
Length  Description                               Value
1       Number of installed application tag       '81'
1       Number of installed application length    X
X       Number of installed application
1       Free non volatile memory tag              '82'
1       Free non volatile memory length           Y
Y       Free non volatile memory
1       Free volatile memory tag                  '83'
1       Free volatile memory length               Z
Z       Free volatile memory
Use the same steps as for the standard Java Card approach above, but instead of selecting an applet to get the free memory, send the GET DATA command directly. This makes it one step simpler... Also, the response of this command is not restricted to a short value, which means you can measure an applet that is larger than 32767 bytes.
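As a sketch, extracting the three values from that response in C could look like this (assuming buf points at the concatenated '81'/'82'/'83' TLVs quoted above, i.e. the value field of the 'FF21' data object):
#include <stdio.h>
#include <stddef.h>

/* Decode a big-endian unsigned integer of len bytes. */
static unsigned long be_value(const unsigned char *p, unsigned char len)
{
    unsigned long v = 0;
    for (unsigned char i = 0; i < len; i++)
        v = (v << 8) | p[i];
    return v;
}

/* Walk the '81'/'82'/'83' TLVs of the GET DATA 'FF21' response. */
static void parse_ff21(const unsigned char *buf, size_t len)
{
    size_t off = 0;
    while (off + 2 <= len) {
        unsigned char tag  = buf[off];
        unsigned char tlen = buf[off + 1];
        if (off + 2 + tlen > len)
            break;                                /* truncated TLV */
        unsigned long val = be_value(buf + off + 2, tlen);
        switch (tag) {
        case 0x81: printf("installed applications:   %lu\n", val); break;
        case 0x82: printf("free non-volatile memory: %lu bytes\n", val); break;
        case 0x83: printf("free volatile memory:     %lu bytes\n", val); break;
        }
        off += 2 + (size_t)tlen;
    }
}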
The size of some objects depends on the Java Card operating system, so there is no direct link. Don't forget that the OS may also need its own memory to support your application. Some operating systems also align the data, which makes it more complex.
So unless you have been provided with tools from your manufacturer, it is hard to be sure. The bytecode size could probably be determined to within a byte or two, but I don't know of any specific tools for that off the top of my head.
Without tools, JCSystem may be your best bet.

Looking for the best equivalents of prefetch instructions for ia32, ia64, amd64, and powerpc

I'm looking at some slightly confused code that attempts a platform abstraction of prefetch instructions, using various compiler builtins. It appears to be based on PowerPC semantics initially, with read and write prefetch variations using dcbt and dcbtst respectively (both of these passing TH=0 in the new optional stream opcode).
On ia64 platforms we've got for read:
__lfetch(__lfhint_nt1, pTouch)
whereas for write:
__lfetch_excl(__lfhint_nt1, pTouch)
This (read vs. write prefetching) appears to match the PowerPC semantics fairly well (with the exception that ia64 allows for a temporal hint).
Somewhat curiously the ia32/amd64 code in question is using
prefetchnta
Not
prefetcht1
as it would if that code were to be consistent with the ia64 implementations (#ifdef variations of that in our code for our (still live) hpipf port and our now dead windows and linux ia64 ports).
Since we are building with the Intel compiler, I should be able to make many of our ia32/amd64 platforms consistent by switching to the xmmintrin.h builtins:
_mm_prefetch( (char *)pTouch, _MM_HINT_NTA )
_mm_prefetch( (char *)pTouch, _MM_HINT_T1 )
... provided I can figure out what temporal hint should be used.
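For what it's worth, a minimal sketch of what that abstraction could look like (the macro names are invented for illustration; __builtin_prefetch is the GCC-style fallback, taking rw = 0 for read / 1 for write plus a 0..3 locality hint):
/* Hypothetical portability macros -- names invented for this sketch. */
#if defined(__i386__) || defined(__x86_64__)
#include <xmmintrin.h>    /* _mm_prefetch, _MM_HINT_T1 / _MM_HINT_NTA */
#define PREFETCH_READ(p)  _mm_prefetch((const char *)(p), _MM_HINT_T1)
#define PREFETCH_WRITE(p) _mm_prefetch((const char *)(p), _MM_HINT_T1)
#elif defined(__GNUC__)
#define PREFETCH_READ(p)  __builtin_prefetch((p), 0, 2)  /* read intent  */
#define PREFETCH_WRITE(p) __builtin_prefetch((p), 1, 2)  /* write intent */
#else
#define PREFETCH_READ(p)  ((void)0)  /* prefetch is only a hint; a no-op is safe */
#define PREFETCH_WRITE(p) ((void)0)
#endif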
Questions:
Are there read vs. write ia32/amd64 prefetch instructions? I don't see any in the instruction set reference.
Would one of the t1, t2, nta temporal variations be preferred for read vs. write prefetching?
Any idea if there would have been a good reason to use the NTA temporal hint on ia32/amd64, yet T1 on ia64?
Are there read vs. write ia32/amd64 prefetch instructions? I don't see any in the instruction set reference.
Some systems support the prefetchw instruction for writes.
Would one of the t1, t2, nta temporal variations be preferred for read vs. write prefetching?
If the line is exclusively used by the calling thread, it shouldn't matter how you bring the line in; both reads and writes would be able to use it. The benefit of prefetchw mentioned above is that it will bring the line and give you ownership of it, which may take a while if the line was also used by another core. The hint level, on the other hand, is orthogonal to the MESI states, and only affects how long the prefetched line survives. This matters if you prefetch long ahead of the actual access and don't want the prefetch to get lost in that duration, or alternatively, if you prefetch right before the access and don't want the prefetches to thrash your cache too much.
Any idea if there would have been a good reason to use the NTA temporal hint on ia32/amd64, yet T1 on ia64?
Just speculating: perhaps the larger caches and aggressive memory bandwidth are more vulnerable to bad prefetching, and you'd want to reduce the impact through the non-temporal hint. Consider that your prefetcher is suddenly set loose to fetch anything it can: you'd end up swamped in junk prefetches that would throw away lots of useful cache lines. The NTA hint makes them overrun each other, leaving the rest undamaged.
Of course this may also be just a bug, I can't tell for sure, only whoever developed the compiler, but it might make sense for the reason above.
The best resource I could find on x86 prefetching hint types was the good ol' article What Every Programmer Should Know About Memory.
For the most part on x86 there aren't different instructions for read and write prefetches. The exceptions seem to be the non-temporal aligned variants, where a write can bypass the cache; as far as I can tell, a read will always get cached.
It's going to be hard to backtrack through why the earlier code owners used one hint and not the other on a certain architecture. They could be making assumptions about how much cache is available on processors in that family, typical working set sizes for binaries there, long term control flow patterns, etc... and there's no telling how much any of those assumptions were backed up with good reasoning or data. From the limited background here I think you'd be justified in taking the approach that makes the most sense for the platform you're developing on now, regardless what was done on other platforms. This is especially true when you consider articles like this one, which is not the only context where I've heard that it's really, really hard to get any performance gain at all with software prefetches.
Are there any more details known up front, like typical cache miss ratios when using this code, or how much prefetches are expected to help?

What makes NSData advantageous?

I've been looking through the Apple documentation for the NSData class, and I didn't really find it too enlightening. I know how to use the class, but I don't really understand the gravity of the advantages that it may or may not provide. I know it's a simple question, but perhaps it would be good to have such information as a reference.
Advantages over what? Certainly, it's useful to represent an arbitrary block of data as an object just as it's useful to represent a string, a number, or a value as an object. Memory management becomes simpler and is consistent with memory management for all other objects, and there are a number of useful methods defined.
Say you want to read a binary file into memory. We won't worry about the reasons why -- there are as many reasons as there are data file formats. You'll have to:
Check the size of the file
Allocate a block of memory of the proper size
Open the file
Read the contents into memory
Close the file
Remember to free the memory when you're done with it (a condition that can sometimes be tricky to detect)
(Optional) Worry about whether the block of memory has been modified
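To make that list concrete, here is roughly what the manual path looks like in plain C (a sketch, with only minimal error handling):
#include <stdio.h>
#include <stdlib.h>

unsigned char *read_file(const char *path, long *out_size)
{
    FILE *f = fopen(path, "rb");                  /* open the file */
    if (!f) return NULL;
    if (fseek(f, 0, SEEK_END) != 0) { fclose(f); return NULL; }
    long size = ftell(f);                         /* check the size */
    if (size < 0) { fclose(f); return NULL; }
    rewind(f);
    unsigned char *buf = malloc((size_t)size);    /* allocate a block */
    if (!buf) { fclose(f); return NULL; }
    if (fread(buf, 1, (size_t)size, f) != (size_t)size) {  /* read contents */
        free(buf);
        fclose(f);
        return NULL;
    }
    fclose(f);                                    /* close the file */
    *out_size = size;
    return buf;        /* caller must remember to free() this eventually */
}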
With NSData, you can just create a new instance from a path or URL and not have to think about the rest.

How to determine SSE prefetch instruction size?

I am working with code which contains inline assembly for SSE prefetch instructions. A preprocessor constant determines whether the instructions for 32-, 64- or 128-byte prefetches are used. The application is used on a wide variety of platforms, and so far I have had to investigate in each case which is the best option for the given CPU. I understand that this is the cache line size. Is this information obtainable automatically? It doesn't seem to be explicitly present in /proc/cpuinfo.
I think your question is related to this question or this one. It is clear that, unless you can rely on an OS or library function, you will want to use the CPUID instruction, but the question then becomes exactly what information you are looking for; and of course, AMD's and Intel's implementations don't need to agree. This page suggests using CPUID.1.EBX[15:8] (i.e. BH) for finding out on Intel, and function 80000005h on AMD. In addition, on Intel, CPUID.2... seems to contain the relevant information, but it looks like a real pain to parse out the desired information.
I think, from what I've read, both AMD and Intel CPUID instructions will support CPUID.1.EBX[15:8], which returns the size of one cache line in QUADWORDs as used by the CLFLUSH instruction (which isn't present on all processors, so I don't know whether you'll always find something there). So, after executing CPUID.1, you'd have to multiply BH by 8 to get the cache line size in bytes. This hinges on my implicit assumption (please can anyone say whether it is really valid?) that the definition of one cache line size is always the same for CLFLUSH and PREFETCHh instructions.
Also, Intel's manuals state that PREFETCHh is only a hint, but that, if it prefetches anything, it will always prefetch a minimum of 32 bytes.
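Putting the CPUID.1 route into code, a short sketch (using GCC/Clang's <cpuid.h> helper; MSVC has __cpuid in <intrin.h> instead):
#include <stdio.h>
#include <cpuid.h>    /* __get_cpuid */

int main(void)
{
    unsigned eax, ebx, ecx, edx;
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 1;
    if (!(edx & (1u << 19))) {      /* CLFSH feature flag, CPUID.1:EDX[19] */
        puts("CLFLUSH not supported; EBX[15:8] is not meaningful");
        return 1;
    }
    /* CPUID.1:EBX[15:8] = CLFLUSH line size in quadwords (8-byte units) */
    printf("cache line size: %u bytes\n", ((ebx >> 8) & 0xff) * 8);
    return 0;
}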
EDIT1:
Another useful resource (even if not directly answering your question) for the optimised use of PREFETCHh is Intel's optimisation manual.

The stack size used in kernel development

I'm developing an operating system, and rather than programming the kernel, I'm designing it. This operating system is targeted at the x86 architecture, and my target is modern computers. The estimated RAM requirement is 256 MB or more.
What is a good size to make the stack for each thread run on the system? Should I try to design the system in such a way that the stack can be extended automatically if the maximum length is reached?
If I remember correctly, a page of RAM is 4K, or 4096 bytes, and that just doesn't seem like a lot to me. I can definitely see times, especially when using lots of recursion, that I would want to have more than 1000 integers in RAM at once. Now, the real solution would be to have the program do this itself, using malloc and managing its own memory resources, but I would really like to know your opinion on this.
Is 4k big enough for a stack with modern computer programs? Should the stack be bigger than that? Should the stack be auto-expanding to accommodate any types of sizes? I'm interested in this both from a practical developer's standpoint and a security standpoint.
Is 4k too big for a stack? Considering normal program execution, especially from the point of view of classes in C++, I notice that good source code tends to malloc/new the data it needs when classes are created, to minimize the data being thrown around in a function call.
What I haven't even gotten into is the size of the processor's cache memory. Ideally, I think the stack would reside in the cache to speed things up, and I'm not sure whether I need to arrange this or whether the processor can handle it for me. I was just planning on using regular boring old RAM for testing purposes. I can't decide. What are the options?
Stack size depends on what your threads are doing. My advice:
make the stack size a parameter at thread creation time (different threads will do different things, and hence will need different stack sizes)
provide a reasonable default for those who don't want to be bothered with specifying a stack size (4K appeals to the control freak in me, as it will cause the stack-profligate to, er, get the signal pretty quickly)
consider how you will detect and deal with stack overflow. Detection can be tricky. You can put guard pages--empty--at the ends of your stack, and that will generally work. But you are relying on the behavior of the Bad Thread not to leap over that moat and start polluting what lies beyond. Generally that won't happen... but then, that's what makes the really tough bugs tough. An airtight mechanism involves hacking your compiler to generate stack-checking code. As for dealing with a stack overflow, you will need a dedicated stack somewhere else, on which the offending thread (or its guardian angel, whoever you decide that is--you're the OS designer, after all) will run.
I would strongly recommend marking the ends of your stack with a distinctive pattern, so that when your threads run over the ends (and they always do), you can at least go in post-mortem and see that something did in fact run off its stack. A page of 0xDEADBEEF or something like that is handy.
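A minimal sketch of that stack-painting idea (names invented for illustration):
#include <stdint.h>
#include <stddef.h>

#define STACK_PATTERN 0xDEADBEEFu

/* Paint the whole stack at thread-creation time. */
static void stack_paint(uint32_t *stack, size_t words)
{
    for (size_t i = 0; i < words; i++)
        stack[i] = STACK_PATTERN;
}

/* Post-mortem check: count untouched words at the low end, assuming the
 * stack grows downward toward stack[0]. Zero headroom means the thread
 * almost certainly ran off the end of its stack. */
static size_t stack_headroom(const uint32_t *stack, size_t words)
{
    size_t n = 0;
    while (n < words && stack[n] == STACK_PATTERN)
        n++;
    return n;
}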
By the way, x86 page sizes are generally 4k, but they do not have to be: with large pages you can go with 2 MB or 4 MB, or even bigger. The usual reason for larger pages is to avoid TLB misses. Again, I would make it a kernel configuration or run-time parameter.
Search for KERNEL_STACK_SIZE in the Linux kernel source code and you will find that it is very much architecture dependent -- PAGE_SIZE, 2*PAGE_SIZE, etc. (below are just some results; much intermediate output has been deleted).
./arch/cris/include/asm/processor.h:
#define KERNEL_STACK_SIZE PAGE_SIZE
./arch/ia64/include/asm/ptrace.h:
# define KERNEL_STACK_SIZE_ORDER 3
# define KERNEL_STACK_SIZE_ORDER 2
# define KERNEL_STACK_SIZE_ORDER 1
# define KERNEL_STACK_SIZE_ORDER 0
#define IA64_STK_OFFSET ((1 << KERNEL_STACK_SIZE_ORDER)*PAGE_SIZE)
#define KERNEL_STACK_SIZE IA64_STK_OFFSET
./arch/ia64/include/asm/mca.h:
u64 mca_stack[KERNEL_STACK_SIZE/8];
u64 init_stack[KERNEL_STACK_SIZE/8];
./arch/ia64/include/asm/thread_info.h:
#define THREAD_SIZE KERNEL_STACK_SIZE
./arch/ia64/include/asm/mca_asm.h:
#define MCA_PT_REGS_OFFSET ALIGN16(KERNEL_STACK_SIZE-IA64_PT_REGS_SIZE)
./arch/parisc/include/asm/processor.h:
#define KERNEL_STACK_SIZE (4*PAGE_SIZE)
./arch/xtensa/include/asm/ptrace.h:
#define KERNEL_STACK_SIZE (2 * PAGE_SIZE)
./arch/microblaze/include/asm/processor.h:
# define KERNEL_STACK_SIZE 0x2000
I'll throw my two cents in to get the ball rolling:
I'm not sure what a "typical" stack size would be. I would guess maybe 8 KB per thread; if a thread exceeds this amount, just throw an exception. However, according to this, Windows has a default reserved stack size of 1 MB per thread, but it isn't all committed at once (pages are committed as they are needed). Additionally, you can request a different stack size for a given EXE at compile time with a compiler directive. I'm not sure what Linux does, but I've seen references to 4 KB kernel stacks (although I think this can be changed when you compile the kernel, and I'm not sure what the default is...).
This ties in with the first point. You probably want a fixed limit on how much stack each thread can get. Thus, you probably don't want to automatically allocate more stack space every time a thread exceeds its current stack space, because a buggy program that gets stuck in an infinite recursion is going to eat up all available memory.
If you are using virtual memory, you do want to make the stack growable. Forcing static allocation of stack sizes, as is common in user-level threading like Qthreads and Windows fibers, is a mess: hard to use, easy to crash. All modern OSes grow the stack dynamically, usually by keeping a write-protected guard page or two below the current stack pointer. A write there tells the OS that the stack has stepped below its allocated space; you then allocate a new guard page below that one and make the page that was hit writable. As long as no single function allocates more than a page of data, this works fine. Or you can use two or four guard pages to allow larger stack frames.
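A user-space sketch of the guard-page mechanism using POSIX mmap/mprotect (a kernel would do the equivalent from its page-fault handler rather than taking SIGSEGV):
#include <stdio.h>
#include <sys/mman.h>

#define PAGE        4096
#define STACK_BYTES (16 * PAGE)

int main(void)
{
    /* Reserve the stack plus one extra page below it. */
    char *base = mmap(NULL, STACK_BYTES + PAGE, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (base == MAP_FAILED)
        return 1;

    /* The lowest page becomes the guard: any touch of it faults. To grow
     * the stack, make this page writable and protect a fresh page below
     * it (a user-space version would do that from a SIGSEGV handler). */
    if (mprotect(base, PAGE, PROT_NONE) != 0)
        return 1;

    char *stack_top = base + PAGE + STACK_BYTES;  /* stack grows downward */
    printf("usable stack: %p .. %p\n", (void *)(base + PAGE), (void *)stack_top);
    return 0;
}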
If you want a way to control stack size, and your goal is a really controlled and efficient environment, but you do not care about programming in the same style as Linux etc., go for a single-shot execution model, where a task is started each time a relevant event is detected, runs to completion, and then stores any persistent data in its task data structure. In this way, all threads can share a single stack. This is used in many slim real-time operating systems for automotive control and the like.
Why not make the stack size a configurable item, either stored with the program or specified when a process creates another process?
There are any number of ways you can make this configurable.
There's a guideline that states "0, 1 or n", meaning you should allow zero, one or any number (limited by other constraints such as memory) of an object - this applies to sizes of objects as well.