Comments on: Apple posts Photoshop CS Benchmarks

By: Dan Neeley Sat, 30 Apr 2005 14:46:45 +0000 No disclosure of testing methods + no solid numbers = worthless PR hyperbole. At least the graphs are pretty.

Considering the clamoring on some forums for a way to test Photoshop performance, perhaps if this gets enough traction it will inspire someone to create an openly documented means of testing systems. One can always hope…

By: Rick Bloom Sat, 30 Apr 2005 14:12:50 +0000 Thanks for your clarifications, Jeff. Living and working in Washington, D.C., I’m constantly exposed to people on both sides of an issue making a lot of noise over very little substance. Marketing and public relations have much in common. Apple does an amazing job given their total market share disadvantage, and is probably entitled to a little hyperbole from time to time.

By: Jeff Schewe Sat, 30 Apr 2005 04:50:48 +0000 Cathy, since Win XP 64 only started shipping this past Monday, and to the best of my knowledge was not available except in beta form, the 64-bit hardware available would probably have been running 32-bit Windows.

Also note that the tests were done with Photoshop CS, not CS2. The tests also didn’t use any dual-core PC chipsets.

The tests were what the tests were. I think the main thrust was to compare the G5s to Pentium 4 and dual Xeons & Opterons. . .

By: Jeff Schewe Sat, 30 Apr 2005 04:37:39 +0000 It should be noted that Mr. Galbraith likewise declined to disclose the exact Photoshop functions called, and that some of the times Mr. Galbraith quoted for specific Macintosh results were far longer than on other people’s Mac G5s (mine in particular).

It should also be noted that the side-by-side test by Mr. Galbraith did not factor price into the equation. The G5 in question actually cost a bit less than the PC ($300–$400 less, depending on what one meant by equal configs).

Also note that Apple’s benchmarks posted were dual machines vs dual machines.

Benchmarks are benchmarks and, like all statistics, are subject to influence. You would be welcome to do your own suite of tests on machines and post them here.

By: Rick Bloom Sat, 30 Apr 2005 00:05:35 +0000 In the never-ending, mind-numbing leapfrogging of hardware and operating systems, Apple has spewed forth yet more meaningless statistics. No doubt they (as have their PC counterparts in the past) have again cherry-picked statistics to favor their particular setup. What distinguishes this latest effort is the non-reproducibility of results due to lack of specificity. In doing this, they have taken a page from the current White House publicity machine, which is long on talk and short on walk. If their results were reproducible and unassailable, you can bet there would be more details given. I’m sure these new machines are extremely powerful but please, can the hype. I don’t remember Apple owning up to the independent assessment (on Rob Galbraith’s site) that a single-processor P4 machine whupped their dual G5 not all that long ago. I’m sure things will remain pretty much as they are now: Macs are better designed and have a superior user interface, and PCs give you more power for less money.

By: Cathy Brown Fri, 29 Apr 2005 23:49:53 +0000 Do I understand this correctly? The Intel benchmarks are for dual-processor boxes, but the Athlon 64 ones aren’t? What gives?

By: FiveseveN Fri, 29 Apr 2005 11:12:14 +0000 I believe a benchmark script could be distributed via the .NET Framework or as simple Photoshop actions. Indeed, it would be nice to be able to compare your own workstation’s power to the marks released by Apple.

By: Pierre Courtejoie Fri, 29 Apr 2005 09:14:06 +0000 I’d have written “Apple posts undocumented Photoshop CS benchmarks”.

In my opinion, it would be much more useful to know the exact suite of steps that their benchmarking action/script uses, in order to:

1) Be able to replicate their tests, and compare our computers with the best current models. (Further down the road, it might help to find issues in a given computer configuration)

2) Assess their validity: Imagine that the second-to-last step of their test is a resize to 10,000×10,000 pixels, followed by a filter that is rarely used in the real world (for instance, Extrude) but that happens to be heavily optimized for one platform compared to the others (I don’t know whether that is the case for this example). If the results of the different tests are simply added together rather than normalized, that single step could outweigh all the other “real world” tests.
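The weighting concern in point 2 can be sketched with made-up numbers (the step names and timings below are hypothetical, not from Apple’s or anyone’s actual tests): summing raw step times lets one heavily optimized step dominate the total, whereas a geometric mean of per-step ratios weights every step equally.

```python
# Hypothetical per-step times in seconds for two machines.
# Machine B is only modestly faster per step, except for one
# rarely used filter that is heavily optimized on B.
from math import prod

steps = ["gaussian_blur", "unsharp_mask", "rotate", "extrude_10k"]
machine_a = [4.0, 3.0, 2.0, 60.0]   # slow on the rarely used extrude step
machine_b = [5.0, 4.0, 3.0, 10.0]   # extrude heavily optimized here

# Simple totals: the extrude step swamps everything else.
total_a, total_b = sum(machine_a), sum(machine_b)
print(total_a, total_b)             # 69.0 vs 22.0 -> B looks ~3x faster

# Geometric mean of per-step ratios (A time / B time): each step
# counts equally, so the one outlier step cannot dominate.
ratios = [a / b for a, b in zip(machine_a, machine_b)]
geo_mean = prod(ratios) ** (1 / len(ratios))
print(round(geo_mean, 2))           # ~1.24 -> B only modestly faster overall
```

Note that A is actually faster on three of the four steps; only the aggregate-by-sum view makes B look three times faster.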

Once again, this situation highlights the need for an open-source, real-world benchmark with different “cases”: Photography, with raw editing, batch renaming, adjusting, contact sheet creation, etc.
Web design: pattern creation, slicing, etc.
Graphic design: selections, blurring, application of styles, conversion to CMYK, export to secure PDF, etc.

BTW, my point is not to contest the performance claims of these tests, but rather to point out their non-replicability, and the impossibility of assessing their real impact on one’s day-to-day work.