Saturday, July 12, 2008
While investigating the performance of a trampolined interpreter on the JVM, I discovered something interesting.
First, some background. In essence, trampolining is a strategy for avoiding stack overflow on platforms (like the JVM) that push a stack frame even when it is not necessary, e.g. when making a tail-recursive function call. So after doing a CPS transform and then trampolining the interpreter by returning lambda-wrapped "thunks", it *should have* handled an infinite tail-recursive call *without* a stack overflow exception. But it failed anyway!
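For readers who haven't seen the technique, here is a minimal sketch of a trampoline in Java (illustrative only; the Thunk interface and countDown function below are made up for this post, not my interpreter's code). The tail call is not made directly; instead a thunk representing it is returned, and a driver loop keeps invoking thunks until a real value comes back, so the Java call stack stays flat:

    // Illustrative trampoline sketch (hypothetical names, not from my interpreter).
    interface Thunk {
        Object run(); // returns either another Thunk or a final value
    }

    public class Trampoline {
        // The "tail call" is suspended as a Thunk instead of being made directly.
        static Object countDown(final long n) {
            if (n == 0) {
                return "done"; // final value
            }
            return new Thunk() {
                public Object run() { return countDown(n - 1); }
            };
        }

        // The trampoline: bounce on thunks until a non-thunk value appears.
        static Object bounce(Object result) {
            while (result instanceof Thunk) {
                result = ((Thunk) result).run();
            }
            return result;
        }

        public static void main(String[] args) {
            // Deep enough that a directly recursive countDown would overflow the stack.
            System.out.println(bounce(countDown(10000000L)));
        }
    }

The while loop in bounce does the bookkeeping that the call stack would otherwise do, which is why (in theory) no StackOverflowError should occur no matter how deep the "recursion" goes.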
After Dan Friedman pointed out (by email) that "If you use exceptions in Java, you will regain tail recursion. I think that is the only way to get around Java.", I did some digging and found something even more interesting.
Tail-recursive (Java) functions are properly handled by some JVMs (specifically, by some JITs) and not by others! See this article for a good explanation.
Try running Listings 4 and 5 (from the referenced article) on your JVM :-D . I suspect that the equivalent C# code for both listings would run perfectly. Can someone with a Windows machine and .NET installed let me know if it works? (Post a comment here or send me email! Thanks in advance!)
Footnote: the conversion of the tail-recursive function (in the article) into while-loop form is one example of how a trampolining interpreter could "break stack build-up".
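For the curious, this is roughly what that conversion looks like (a generic sketch of the idea, not the article's actual listings): the tail call becomes reassignment of the parameters inside a loop, so no stack frames pile up.

    public class TailToLoop {
        // Tail-recursive version: on a JVM that does not optimize tail calls,
        // each call pushes a frame, so a large n ends in StackOverflowError.
        static long sumRec(long n, long acc) {
            if (n == 0) return acc;
            return sumRec(n - 1, acc + n); // tail call
        }

        // Same computation with the tail call turned into a while loop:
        // the "recursive call" is just reassignment of n and acc.
        static long sumLoop(long n, long acc) {
            while (n != 0) {
                acc = acc + n;
                n = n - 1;
            }
            return acc;
        }

        public static void main(String[] args) {
            System.out.println(sumLoop(10000000L, 0));  // runs fine
            // sumRec(10000000L, 0) would very likely overflow the stack instead.
        }
    }

A trampolined interpreter effectively performs this kind of transformation at run time: its driver loop plays the role of the while loop above.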
Wednesday, July 2, 2008
Directions
This is the first post on my brand new blog. The essential difference between this blog and the old one is a change of "voice" and focus.
Unlike the "catch all" nature of the old blog, this one will have a sharper focus, specifically on things technical (mostly code/mathematics/algorithms etc - but "technical" is used very broadly - I could very well end up dissecting the Black Scholes Equation for pricing derivatives, for example.). I will create a separate blog for non technical ruminations (TBD).
So what's happening on my end? Life is insanely busy (as usual), I am meeting great people and interesting situations (as usual), and the great difficulty (as always) is choosing among the zillion challenges to tackle next. "So little done, so much to do", to quote Cecil Rhodes, but hey, it beats a boring, "normal" life any day.
On the technical/programming front, I am investigating the impact of various (language) interpreter designs on execution time. Over the coming weekend, for example, I will try to get some sense of how much overhead trampolining adds to interpretation time (vs. a plain recursive interpreter).
So that is what I'll write about in the next few entries. And then there is the reinforcement learning library I wrote for the good folks at the DRDO (the Defence Research and Development Organisation, for non-Indian readers). I could probably spin that off into an open source project once I (find some time to) tidy it up a bit.
Then there's the derivation of concurrent algorithms, the relational-algebra-to-SQL query generator thingy I've been fiddling with off and on for a while now, and thoughts on programming with typed channels and machine learning algorithms as programming primitives. Lots of interesting things to write about.
Stay tuned.
Hello World! It is good to be back.
Unlike the "catch all" nature of the old blog, this one will have a sharper focus, specifically on things technical (mostly code/mathematics/algorithms etc - but "technical" is used very broadly - I could very well end up dissecting the Black Scholes Equation for pricing derivatives, for example.). I will create a separate blog for non technical ruminations (TBD).
So what's happening on my end? Life is insanely busy (as usual) and I am meeting great people (as usual) and situations (as usual) and the great difficulty (as always) is choosing among the zillion challenges to tackle next. "So little done, so much to do", to quote Cecil Rhodes, but hey it beats a boring, "normal" life any day.
On the technical/programming front, I am investigating the impact of various (language) interpreter designs on execution time. Over the coming weekend, for example, I am trying to get some sense of how much overhead trampolining adds to interpretation time (vs a normal recursive descent parser).
So that is what I'll write about in the next few entries. And then there is the reinforcement learning library I wrote for the good folks in the DRDO (Defence Research Development Organization for non Indian readers). I could probably spin that off into an open source project once I (find some time to) tidy it up a bit.
Then there's the derivation of concurrent algorithms, the relational-algebra to SQL query generator thingy I've been fiddling with off and on for a while now, and thoughts on programming with typed channels and machine learnng algorithms as programming primitives. Lots of interesting things to write about.
Stay tuned.
Hello Word! It is good to be back.