PhotoshopNews.com
Jun 7, 2005

Intel Inside: what does it mean for Photoshop?

With the recent announcement by Apple that they will “transition” from PowerPC to Intel microprocessors, the question is, what does this mean for Photoshop users?

I took an informal survey of several “engineering types” regarding the potential impact, good or bad, of Apple’s transition plans on software developers. The single biggest response to a variety of questions was “Dunno”.

That is the current gist of the answer to the question, “So, what does this mean for Photoshop users?” In general, the answer is “we don’t know yet”.

There are some clear hurdles to cross. For example, to run natively under “Mac for Intel”, Photoshop will need to be compiled as a “Universal Binary” application within Xcode. Any way you look at it, this is going to be extra work for software engineers. It will not be fun, it will be work. Also, developers who have relied upon CodeWarrior will be forced to migrate to Xcode. Metrowerks has already sold off the rights to their Intel compiler, so there is no CodeWarrior for x86. This could be a major hassle for smaller 3rd party developers, but one would presume that Adobe and the Photoshop engineers are equal to the task.
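
For the curious, a Universal Binary is one file containing both a PowerPC and an Intel build of the same code. Here is a minimal sketch, assuming the architecture macros and gcc flags described in Apple’s Universal Binary guidelines:

    /* One source file built into both slices of a Universal Binary,
       e.g. with "gcc -arch ppc -arch i386 hello.c" per Apple's docs.
       The predefined macros identify which slice is running. */
    #include <stdio.h>

    int main(void)
    {
    #if defined(__ppc__)
        printf("Running the PowerPC slice\n");
    #elif defined(__i386__)
        printf("Running the Intel slice\n");
    #endif
        return 0;
    }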

During Steve Jobs’ WWDC demo, he showed Photoshop CS2 launching and doing a few things under a developmental version of 10.4.1 running on an Intel chip. Using Rosetta, it launched and ran. This is a good thing; however, it doesn’t speak to the overall functionality of Photoshop CS2 running under Rosetta, nor to the performance hit it will take because of the emulation. Also not answered are questions regarding Photoshop plug-ins. Will they need to be re-written using Xcode? Or can they live under Rosetta?

From the publicly available Universal Binary Programming Guidelines on the Apple Developer Connection web site, some things are clear. Rosetta emulates a G3 processor.

Rosetta does not run the following:
Applications built for Mac OS 8 or 9
Code written specifically for AltiVec
Code that inserts preferences in the System Preferences pane
Applications that require a G4 or G5 processor
Applications that depend on one or more kernel extensions
Kernel extensions
Bundled Java applications or Java applications with JNI libraries that can’t be translated

So, any plug-in that falls under any of the categories above will need to be re-written just to run under Rosetta. Particularly troubling is the “Code written specifically for AltiVec” item, since AltiVec optimizations are relatively common. Anything written as AltiVec vector code will need to be revised, either by using the Accelerate framework or by porting the AltiVec code to Intel instruction set architecture (ISA) extensions such as MMX™, SSE, SSE2, and SSE3. The net result is that it’ll take work, and it is unknown at this time what sort of performance penalties will be encountered. But if an application or plug-in will run under Rosetta emulation, it’ll be like running on a G3 processor; if an application or plug-in uses any G4 or G5 optimization, such as AltiVec, it won’t run without re-writing.
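
To make the porting work concrete, here is a hypothetical sketch: the same four-wide float addition written first with AltiVec intrinsics, then with their rough SSE equivalents (the function and its assumptions are illustrative, not Adobe’s actual code):

    /* Adds two float arrays four elements at a time. Assumes n is a
       multiple of 4 and the pointers are 16-byte aligned, as both
       instruction sets require for these loads and stores. */

    #ifdef __VEC__                    /* PowerPC compiled with AltiVec */
    #include <altivec.h>

    void add_floats(float *dst, const float *a, const float *b, int n)
    {
        for (int i = 0; i < n; i += 4) {
            vector float va = vec_ld(0, &a[i]);
            vector float vb = vec_ld(0, &b[i]);
            vec_st(vec_add(va, vb), 0, &dst[i]);
        }
    }

    #else                             /* x86 compiled with SSE */
    #include <xmmintrin.h>

    void add_floats(float *dst, const float *a, const float *b, int n)
    {
        for (int i = 0; i < n; i += 4) {
            __m128 va = _mm_load_ps(&a[i]);
            __m128 vb = _mm_load_ps(&b[i]);
            _mm_store_ps(&dst[i], _mm_add_ps(va, vb));
        }
    }
    #endif

In simple cases like this the port is nearly mechanical; the hard part is code that leans on AltiVec features with no one-to-one SSE counterpart.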

There are also architectural differences to consider. The PowerPC and x86 architectures have some fundamental differences that can prevent code compiled for one from running properly on the other. The extent to which one needs to change one’s PowerPC code so that it runs natively on a Macintosh using an Intel microprocessor depends on how much of that code is processor specific.
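
The classic example, covered at length in Apple’s guidelines, is byte order: PowerPC is big-endian and x86 is little-endian, so code that reads binary data (file formats, network packets) byte-for-byte has to swap. A minimal sketch using Mac OS X’s documented byte-swapping routines:

    /* Reads a 32-bit big-endian value from a file buffer.
       OSSwapBigToHostInt32() comes from <libkern/OSByteOrder.h>;
       it is a no-op on PowerPC and a byte swap on Intel. */
    #include <libkern/OSByteOrder.h>
    #include <stdint.h>
    #include <string.h>

    uint32_t read_big_endian_u32(const uint8_t *buf)
    {
        uint32_t raw;
        memcpy(&raw, buf, sizeof raw);   /* the copy avoids alignment traps */
        return OSSwapBigToHostInt32(raw);
    }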

However, the engineers I talked to have indicated that going from PowerPC to Intel architecture is a lot easier than the other way around. So, that’s a “good thing”. Another point raised is that once this change has been made, software developers will see a long-term benefit: chipset-specific optimizations will be limited to essentially one chipset, Intel. So, the burden of maintaining both PowerPC and Intel optimizations will be lessened once Apple’s transition to Intel chips is completed.

The other major question mark is 64 bit processing. So far, neither Apple’s nor Intel’s announcements have included any specific references to 64 bit processors. The available literature only uses the term x86 as a synonym for IA-32 (Intel Architecture 32-bit).

It’s widely expected, but not yet confirmed by Adobe, that at some point future versions of Photoshop and other Creative Suite products will transition to 64 bit. Even Photoshop CS2 needs Windows XP 64-bit to use more than 2 GB of RAM. The hope and expectation is that 64 bit processors will eventually be a big help to processor- and RAM-intensive applications such as Photoshop, once those applications are optimized for 64 bit. Until Apple specifically addresses this question, the “Dunno” answer stands.
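
The underlying limit is pointer width: a 32-bit process can address at most 4 GB, and in practice 2–3 GB once the OS takes its share, no matter how much RAM is installed. A trivial illustration:

    /* A 32-bit build prints 32 (a ~4 GB address ceiling);
       a 64-bit build prints 64 and the ceiling effectively vanishes. */
    #include <stdio.h>

    int main(void)
    {
        printf("Pointers here are %zu bits wide\n", sizeof(void *) * 8);
        return 0;
    }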

While Bruce Chizen, Adobe’s CEO, came on stage during Steve Jobs’ keynote and stated support for Apple’s transition to Intel, he did not address the engineering implications. It’s also unclear how far in advance of the announcement Adobe personnel were briefed. Indications are that executives, but not engineering staff, knew about the announcement in advance. From that, one could deduce that the Adobe engineering teams found out when the rest of the world found out.

However, it’s been my experience, working with the Photoshop engineering team, that no engineering challenge is too great. While the road map for future versions of Photoshop has been made murky by Apple’s announcement, if any group of engineers can figure out a way to make lemonade from lemons, it’s the Photoshop engineers.

When I asked several Photoshop engineers about the implications for Photoshop, the “official” party line was that what Bruce Chizen has already said will have to stand for now: that the Intel transition “is a really smart move on Apple’s part and [we] plan to create future versions of our Creative Suite for Macintosh that support both PowerPC and Intel processors.” Unofficially, they are trying to determine what the best course of action will be for Photoshop and its users. Their track record for maximizing performance for Photoshop is pretty darn good, so on that account, I’m not terribly worried. The aim will be to have the transition be seamless and essentially hidden from Photoshop users.

The net result of the conversations is that it’s really too early to tell much. In the long term, unifying behind one chip architecture is probably a good thing. The impact may be greatest on smaller 3rd party developers, but the changes required, depending on how the original code is written, should generally be less than going from System 9.x to 10.x. In general, the transition will also be easier for developers who are already cross-platform. The good news is that we’re at least one year and one version of Photoshop away from any direct day-to-day impact on users.

7 Responses to “Intel Inside: what does it mean for Photoshop?”

  1. Pierre Courtejoie Says:

    Interesting read, Jeff, as always.

    About the porting of AltiVec optimizations to their MMX/SSE counterparts, isn’t it already done by Apple, thanks to the fact that Photoshop already runs on x86 processors in its Windows version?

    Of course, I’m not a software engineer, but it seems to me that the routines already exist. Maybe the Ctrl+C and Command+V keys will be worn out on the 10th floor of the Adobe building…

  2. Pierre Courtejoie Says:

    D’oh! Of course, it should read: “isn’t it already done by Adobe…”. I’m making the same mistake Bruce Chizen’s mom does (as he mentioned during the WWDC keynote).

  3. Jeff Schewe Says:

    As far as Adobe and Photoshop go, yes, the Photoshop engineers will be well able to handle the tasks and have done so repeatedly through a long list of changes and “transitions” over the years.

    The primary concern really is the smaller 3rd party developers, who may not have the experience or the resources that an Adobe would have.

    Many 3rd party Mac developers use CodeWarrior for Mac code, for example. Well, Metrowerks sold the Intel dev rights to CodeWarrior, and the odds of Metrowerks building new Intel compilers are, well, extremely remote. Metrowerks is an independently operating subsidiary of Freescale Semiconductor, the Motorola spin-off that WAS making G4 chips. Don’t think Freescale is gonna be too jazzed about building dev tools for a company that is dropping their chips. . .

    So, 3rd party developers will be forced to move to Xcode to do Universal Binaries, to build both PowerPC and Intel-compatible Mac applications.

    I will say that PixelGenius has already ordered our “Developer Transition Kit”, and our engineer is reading up on Universal Binaries and has already downloaded Xcode 2.1. With a year to prepare, PixelGenius will be in fine shape. We already do cross-platform engineering, so in a way we’ll be better off than some developers without any Intel/Windows experience.

    About all I can say is, remember the old Chinese curse, “may you live in interesting times”? Well, on Monday, June 6th, 2005, things just got a lot more “interesting” (if you like that sort of thing).

  4. Neil Duffin Says:

    I suspect that most 3rd party developers have written the vast majority of their code in C or C++ (Objective-C?). Metrowerks’ CodeWarrior compiler uses the same language standards as any other compiler, including the gcc behind Xcode. There are some specialist directives (pragmas) that are specific to certain compilers, but they usually have equivalents in other compilers (and tend not to be used at all, or very sparingly). So, the transition from CodeWarrior to Xcode should not be a major issue: rebuild the projects and change a few compiler-specific sections. The vast majority of the code is likely to compile exactly the same under Xcode as it does under CodeWarrior or any other compiler.
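
    A hypothetical sketch of such a compiler-specific section; __MWERKS__ and __GNUC__ are the compilers’ real predefined macros, while the branch bodies are placeholders:

        /* The same file builds under either compiler; only the branch
           taken changes when a project moves from CodeWarrior to Xcode.
           __MWERKS__ is CodeWarrior's predefined macro; __GNUC__ is
           gcc's (the compiler behind Xcode). */
        #if defined(__MWERKS__)
            /* CodeWarrior-specific pragmas or workarounds go here */
        #elif defined(__GNUC__)
            /* gcc/Xcode equivalents go here */
        #endif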

    Since the source code (C, C++, or Objective-C) is not generally processor specific (they are general languages), there should not be an issue with the move to Intel either. The only place an issue will occur is if the developer has written assembly language sections (machine code), as this IS processor specific, or directly accesses hardware; these will need to be rewritten. Hardware access is very unlikely; in today’s operating systems (OS) it normally only occurs in device drivers.

    However, the writing of such machine code is much more difficult and intricate than writing C, etc., and as such I doubt that a major number of 3rd party developers (or any developers for that matter) have huge sections of it. The old adage is that a program spends 90% of its time in 10% of its code. It will be that 10% that benefits from being ‘hand coded’ in machine code. I suspect that, worst case, that 10% would need to be rewritten. However, as I say, I would suspect most developers have between 0% and 1% in machine code; the compilers these days are very, very good, and trying to beat them with hand-written code is not normally that easy.

    Another area where there may be an issue is applications which have been Carbonized. I understand that these will need to be converted to Cocoa to work with the new Intel-based Macs. Any applications which fall into this category may well require a total rewrite. I have no idea how many applications fall into this area, but anything that is written specifically for OS X is likely not to be Carbonized. Only more legacy applications will have taken this path?

    Obviously calls to OS X are going to be the same, and as with the transition from 68x00 to PowerPC, most of the time will be spent in calls to the OS. In the last transition I think they said that 90% of the time was spent there. The OS code will be rewritten to work natively on the Intel chip – you can’t run the OS through Rosetta, and Apple wouldn’t be daft enough to try. Therefore, only the remaining 10% of time will be emulated. There are exceptions though: anything which does heavy maths processing or image processing may well do it in its own code, so it will need to be emulated until it’s recompiled.

    It’s not all bad though. Java apps are not machine specific (which is why Rosetta doesn’t emulate them?) and so will run ‘natively’ on the Intel anyway. Also (and I know I’m going to get flamed for this, but stay with me) Intel chips are faster. No, I’m not saying Windows PCs are faster – the 2.7GHz dual PowerPC seems to be faster than the high-end PC running Windows. However, Windows is not a very efficient OS. I’ve run BeOS and Linux on a PC box and they’re *MUCH* faster than Windows on the same machine. OS X is a much more efficient OS. I suspect that translating it to an Intel chip will give a significant performance increase, perhaps enough to overcome most if not all of the performance lost to emulation. Particularly if only 10% of the time for most applications will be under emulation (note I say *time* not *code*: 100% of a non-Intel application’s code will have to be emulated, but if it only spends 10% of its time there and 90% in the OS’s optimised routines, there should be much less of a hit).

    So, what I’m saying is that *most* applications are likely to run, perhaps at pretty much the same speed as now or not hugely slower. Also, recompiling most is likely to be reasonably straightforward. There will be exceptions. Some very fast applications using machine code will require much more work. And these are the ones more likely to use AltiVec code, and therefore not work under the emulator. Plus the OS 8 & 9 ones, of course.

    As for specifically G4/G5 code not working – I would have thought that most applications would at least run on a G3, otherwise their coders are reducing their market substantially.

    BTW, I am a programmer. :) I’ve also spent the last 3 years writing code (in C and C++) that works across 4 different hardware platforms.

  5. Jeff Schewe Says:

    What Neil says. . .

    :-)

    Nice technical summary. . .it pretty much mirrors what the other engineering types have told me (but with technical explanations that go further than my “simple” explanations).

    The only caution is that some highly optimized processing for digital imaging does indeed have dependencies on things like G4 AltiVec or 64 bit routines in the G5. Those vector-based optimizations will not run in Rosetta and must be re-written, or at the very least re-compiled, for Intel instruction set architecture (ISA) extensions such as MMX, SSE, SSE2, and SSE3. Not a major thing, but still a thing ya gotta do.

    Your point about Intel chips being “faster” is also pretty much in line with the opinions of people who write cross-platform, and yes, the OS DOES get in the way a lot.

    :-(

    But, in general, the Intel Inside of Apple is neutral to positive for Photoshop (once the work that is required gets done).

  6. Daveed Vandevoorde Says:

    Note that shrink-wrapped software (like Photoshop) typically dynamically tests for the presence of AltiVec (using the Gestalt manager, for example). I’m guessing that Rosetta does not decide a priori whether an executable (or shared library) uses AltiVec instructions. Instead, much like executing on a G3, I’m guessing it will raise an “illegal instruction” signal when attempting to execute such an instruction.

    That would reduce the AltiVec issue to “just” a performance issue in most cases.
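
    For reference, a minimal sketch (assuming Apple’s documented Gestalt selectors) of the runtime test Daveed describes:

        /* Runtime AltiVec check via the Gestalt manager. The selector
           and bit constant come from Apple's CoreServices headers. */
        #include <CoreServices/CoreServices.h>

        static Boolean has_altivec(void)
        {
            SInt32 cpu_features = 0;
            if (Gestalt(gestaltPowerPCProcessorFeatures, &cpu_features) != noErr)
                return 0;
            return (cpu_features & (1 << gestaltPowerPCHasVectorInstructions)) != 0;
        }

    If his guess is right, an app using a test like this would simply take its scalar code path under Rosetta, which is what reduces the problem to performance.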

  7. Jim Goshorn Says:

    Thanks to Jeff and Neil for the great posts.

    Is the consensus that we are not likely to see any production Intel-based PowerBooks or PowerMacs until some time in 2007?
