Dusk of an age: Apple WWDC post-keynote predictions

Well, the Jobs reality-distortion field was in full swing again. The presentation was actually quite impressive, and the slightly ironic touch “this announcement should be news to almost all of you.. unless you read the Wall Street Journal” was amusing.

The performance of Rosetta JIT-ing PPC code was quite impressive, and the fact that it just works – running MS Office 2004 and Adobe Photoshop – was astounding (Jobs made another ironic gesture by tapping impatiently while waiting for Photoshop to load its plugins). We don’t know how many times the two applications had been run, and how many instructions had been pre-cached, but I guess this is what Digital’s FX!32 must have looked like. Now that’s one company that was the Xerox PARC of the ’80s-’90s: brilliant engineers, terrible management.

So, with the impending transition of Apple to Intel processors, what does it mean for other players in both the PPC and x86 camps? Here are some thoughts:

  • Sayonara, Yellow Dog Linux. Terra Soft, the parent company, might survive as a High Performance Computing vendor specializing in IBM PPC64 solutions, but seeing as Red Hat already partners with IBM, and has engineers working on the GCC compiler for PPC64, there’s stiff competition there.
  • Is Intel really planning something big? Jobs tellingly focused on projected mid-2006 performance-per-watt figures, and Intel’s Paul Otellini made a self-effacing presentation, playing Apple’s 1996 TV ad of the Intel bunny man on fire and interpreting it as a message from Apple that Intel CPUs need to run cooler. Since I can’t see Pentium 4s running efficiently anytime soon – and that dual-core Pentium D is just a kludgy hack, forcing Intel to price their fastest non-EE Pentium D below AMD’s cheapest Athlon 64 X2 – that means.. dual-core Pentium M chips with x86-64 extensions? If the Israeli team works on it, it might actually end up looking good. They might want to redesign the FSB, though – copy AMD by integrating the memory controller on-die, and letting the two cores talk to each other without going through the FSB and back? They already pay AMD to license the 64-bit extensions anyway.
  • Why Intel? Power usage, assuming they are going to use Pentium Ms, and production capacity. Intel chips still don’t use SOI (silicon-on-insulator), though, so in that respect IBM (PowerPC) and AMD (Opteron, Athlon 64) arguably have a leg up. But it’s jarring to see Jobs comparing the two architectures as PowerPC vs Intel .. guys, let’s call an apple an apple (umm..) and acknowledge AMD’s contribution there. Debian actually calls the platform AMD64, and even Linus was known to be annoyed when Intel launched their “IA-32e” platform. You’re talking to your developers; it’s not like they’d get confused or anything. Guess the chance of us seeing some AMD-Apple collaboration is pretty low here, considering the focus on the Intel branding. Intel executives must be quite happy, after IBM’s PR wins in the console market.
  • Who wrote Rosetta? Presumably Transitive, and their low-key, behind-the-scenes approach probably explains why they were not named directly. Considering Rosetta was pretty much the coolest part of the keynote, that’s probably just as well for Jobs. It is interesting that the blurb on their site includes

    “Transitive expects to announce that a second computer OEM will deploy products enabled by its technology during the 1st half of 2005 and that others will deploy QuickTransit before the end of the year. Unfortunately, strict confidentiality obligations prevent us from discussing these relationships in any detail.”

    Timing sounds about right..

  • People speculating on running OS X on generic hardware are probably (slightly) deluded. I can see the technical possibility of running OS X on a suitably modified virtualizer, like VMware – the changes required might be as little as having a suitable ID reported by the BIOS – but a commercial solution will never be made available. The PearPC team’s job has just been made much simpler though.
  • Universal Binaries. Guess fat binaries don’t sound as cool. Oh well. Not a new feature, guys – NEXTSTEP did it (though of course NEXTSTEP is OS X’s older brother). Even Mac OS did it. And, if you’re on a Unix/X11 platform, ROX does it too.

The transition being stretched over several years is good news though. I’ll probably ditch my iBook – Linux desktops are looking pretty much there (wireless configuration, hardware management), and I like the feeling of helping push a free solution rather than selfishly buying into the advance guard. And trying out Gtk#/Mono and Java-Gnome apps is much less convenient on a Mac!

So if anyone wants an iBook G4 1GHz, 768 MB RAM, in pristine condition, around August 2005, let me know. If you want to wait for the eBay auction, that’s cool too.


Static (lexical) vs dynamic scoping

Eric and I were discussing scoping in Scheme and Python earlier today, our third such discussion over the past few weeks – and we finally nailed it shut. The first time, he brought up dynamic scoping in Common Lisp and how Prof. Friedman dislikes it; the second was on how Python appears to have dynamic scoping (pre-2.2 Python, which lacked nested lexical scopes, could look that way); and now, thanks to Wikipedia, I think we have it right.

Provided Eric gets the H211 Introduction to Programming (Honors) class, which is in Python, and I get the C211 Introduction to Programming (Scheme), our discussion should stand us in good stead, though funnily enough, today I played the Python guy and he played the Scheme one.

I’m going to show the examples in both Scheme and Python. The first one in each section appears to show that the language in question features dynamic scoping; this is incorrect, as both languages are actually lexically scoped.

Scheme:
Bad:

(let ((pi 3.1415))
 (define area
  (lambda (r)
   (* pi r r)))
 (display (area 10))  ; 314.15
 (newline)
 (set! pi 3)
 (display (area 10))) ; 300: set! mutates the very binding area closed over

Good:

(let ((pi 3.1415))
 (define area
  (lambda (r)
   (* pi r r)))
 (display (area 10))  ; 314.15
 (newline)
 (let ((pi 3))
  (display (area 10)))) ; 314.15: the inner pi is a fresh binding

Python:
Bad:

pi = 3.1415
def area(r):
 return pi*r*r
area(10)  # 314.15
pi = 3
area(10)  # 300: area reads the rebound global pi

Good:

pi = 3.1415
pi_holder = 10
def create_area():
 pi_holder = pi  # local pi_holder, different from pi_holder outside
 def area(r):
  return pi_holder*r*r
 return area
area = create_area()
area(10)    # 314.15
pi_holder    # Still 10
pi_holder = 3
area(10)    # Still 314.15

The above works, but it is a bit clunky. I introduced pi_holder = 10 to show (1) that the pi_holder inside create_area() is a local variable, and (2) that this local pi_holder is the one in area‘s scope, so changing the outer pi_holder does not affect it.

Isn’t it easier to just do pi = pi? Well, that does not work. My initial hunch was that Python reads the LHS of the assignment, decides pi has been redeclared as a local variable, and then gets confused trying to initialize that uninitialized local with its own value. It’s actually worse than that; this code does not work either:

x = 42
def local_var_test():
 temp = x  # UnboundLocalError! the assignment to x below makes x local here
 print temp  # never reached
 x = temp
 print x

Surprise! Python won’t let you do that either. Take out the last two lines, though, and the code works. Basically, if a variable is assigned anywhere in a block, it is treated as a local variable everywhere in that block, and trying to read the same-named variable from the surrounding scope, even before the local assignment, will fail.
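That rule can be sketched directly (the function names here are my own), along with the global statement, Python’s escape hatch for saying “I really mean the module-level variable”:

```python
x = 42

def broken():
    temp = x   # raises UnboundLocalError: the assignment to x below
    x = temp   # makes x local for the *entire* function body
    return x

def fixed():
    global x   # explicitly refer to the module-level x instead
    temp = x
    x = temp
    return x

try:
    broken()
    outcome = "no error"
except UnboundLocalError:
    outcome = "UnboundLocalError"

# outcome is "UnboundLocalError"; fixed() happily returns 42
```

Of course, global only helps when you want to share the outer variable; it does nothing to freeze a value the way the closure trick below does.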

But this is where default parameters come in handy. A better way to rewrite the clunky code above is as follows:

pi = 3.1415
def create_area(pi = pi):
 def area(r):
  return pi*r*r
 return area
area = create_area()
area(10)    #314.15
pi = 3
area(10)    #314.15

So Python has static scoping after all. The thing to bear in mind is that Scheme functions are named closures, while Python functions inherit the surrounding scope, so to freeze the variables you depend on you have to wrap your function definition inside another function that copies in the values you need into its local variables.
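The same default-parameter trick, by the way, is the classic fix for late-binding closures created in a loop – a standalone sketch, not from the discussion above:

```python
# All three lambdas close over the same loop variable i, so by the
# time they are called the loop has finished and they all see 2.
late = [lambda: i for i in range(3)]
results_late = [f() for f in late]        # [2, 2, 2], not [0, 1, 2]

# Freezing i with a default parameter, just as with pi above,
# captures the value each lambda should remember at definition time.
frozen = [lambda i=i: i for i in range(3)]
results_frozen = [f() for f in frozen]    # [0, 1, 2]
```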


And the funny thing is, I started the day trying to find good dynamic languages that run on the Java platform (platform envy, I guess, since .NET more prominently touts its language neutrality). Sun’s finally catching up, though – Tim Bray wrote a few months back about the Coyote project to support dynamic languages in Sun’s open source IDE, NetBeans, and pointed to an interesting Sun-developed scripting language, Pnuts. Which reminded me of Groovy and Boo.

Googling for groovy boo .net – Groovy being a Ruby-like scripting language for Java that received a lot of attention a few months ago and has since taken some flak over its development model, and Boo being the Python-like language for .NET – yields this very interesting Slashdot discussion that led me to such intriguing functional OO languages as Scala and Nice. .NET fans do not get to have all the fun!

Groovy, on the other hand, seems rather disappointing. Oh well. Scala looks more like Haskell, but with type inference in an OO setting (like Boo).. yay!

Update 2005/06/06

Realized a few days ago, but hadn’t gotten round to posting about it, that I was unfairly comparing Scheme and Python: Python functions are closures in themselves too. Note:

pi = 3.1415
def area(r):
 return pi*r*r
print area(10)  # 314.15
def test():
 pi = 3
 print area(10) # 314.15
test()

In the earlier example, overriding the value of pi with pi = 3 is the equivalent of doing (set! pi 3) in Scheme: it changes the value of the top-level pi, which is the one that area knows. Under dynamic scoping, which uses a stack of bindings to figure out which assignment applies, the pi = 3 inside test() would instead have affected the call to area just after it.
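To make that stack concrete, here is a toy emulation of dynamic scoping in Python (the names env, lookup, and so on are entirely my own): area resolves pi at call time by walking a stack of binding frames, so a caller that pushes its own binding changes what area sees.

```python
env = [{"pi": 3.1415}]        # a stack of binding frames, newest last

def lookup(name):
    # Dynamic-scope lookup: walk the stack from the most recent frame down.
    for frame in reversed(env):
        if name in frame:
            return frame[name]
    raise NameError(name)

def area(r):
    return lookup("pi") * r * r   # pi is resolved at call time, not definition time

def test():
    env.append({"pi": 3})     # what pi = 3 would mean under dynamic scoping
    try:
        return area(10)       # 300: area sees the caller's binding
    finally:
        env.pop()             # the binding unwinds when test() returns
```

Outside test(), area(10) still gives 314.15; inside it, the pushed frame shadows the top-level pi, which is exactly the behaviour the lexically scoped Python example above does not exhibit.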