It’s getting dark for Intel

Author’s note: while at first glance this article may seem to have been far off the mark, the main reason these fumbles on Intel’s part didn’t cost the company more market share was its anticompetitive tactic of forcing OEMs’ hands to push AMD out of the market. There was no court judgement against Intel because it settled with AMD for $1.25 billion in 2009, which is a rather clear admission of guilt. Additionally, both the EU and the US government filed antitrust cases against Intel.

September 2000 – Intel’s blackest month in countless years

Following the official withdrawal of the 1.13 GHz PIII in the last days of August, and in the wake of this loss of face, it also became pretty obvious that Intel would have to ditch its grandiose plans of breathing new life into the dying P6 core with a 200 MHz FSB, a 0.13 micron process, and larger on-die L2 caches. It seems the Coppermine core (the last and most advanced modification of the half-decade-old P6 core, introduced with the Pentium Pro 150 MHz in the mid-1990s) simply won’t be able to go much further. While the AMD Thunderbird recently reached 1.1 GHz without much trouble and still has a lot of headroom for future clock increases, the Coppermine can’t be produced in sufficient quantities even at the 1 GHz level – and the 1.13 GHz version doesn’t even work properly any more.

 

Celeron’s throat cut by Intel’s own incompetence

But it’s not only the higher-end consumer and corporate market where Intel loses ground every day. In the lower-end range it looks even worse: virtually all hardware reviewers came to the same conclusion in the last couple of weeks, namely that the new AMD Duron delivers tough competition for Intel’s Pentium III at prices lower than even those of a Celeron. What marketing chances does the Celeron still have under these conditions? This cheap processor was a superb alternative to the PII/PIII back when the only non-Intel competition consisted of faulty and slow Socket 7 processors like the AMD K6-x series and the Cyrix II/III, around which the Celeron could run circles. Nowadays the competition includes a Duron that has about the same lead over the Celeron as the Celeron had over the K6-2.

Intel’s questionable marketing tactic of crippling processors to create different categories – instead of developing different designs – is now bearing “fruit”: the whole concept threatens to collapse. In the era of slow, FPU-weak Socket 7 competition, Intel could get away with a CPU whose FSB was too slow and whose L2 cache was too small (especially since, from time to time, the Celeron’s clock speed was close on the heels of the PII/PIII’s – something you can’t say for the new Coppermine Celerons any more). But now that the even cheaper low-end competition offers a much stronger FPU, a 4x larger L1 cache and a 3x faster FSB (with an up to 2x faster memory bus), the Celeron’s days are numbered.

 

Timna laid to rest

Descending further down the ladder, Intel managed to stumble in the very lowest-end market as well. Its system-on-a-chip design, Timna, was canceled in the last days of September 2000. Unlike the Cyrix Media GX, which has been used successfully in various desktop and mobile lines by Compaq, its Intel counterpart Timna came too late, and being tied to an overpriced memory standard, RDRAM, made it more expensive than a standard Celeron-based PC with, say, a fully integrated SiS chipset. With the introduction of the Micro ATX and Flex ATX standards, as well as the spread of graphics and sound integration into chipsets, Timna became not only too expensive but also obsolete on all levels, not least because of its inflexibility.

 

Servers galore?

Let’s jump to the other end of the spectrum, the highest-end workstation and server area. Things don’t look much better there either. The first 64bit Intel processor ever – currently named Itanium, known for many years under the codename Merced – has been stopped before release. It seems the first 64bit CPU actually coming out of Intel will be its second-generation design, codenamed McKinley.

No wonder the Itanium died before birth. Not only was it to be a pure 64bit CPU, meaning that all current 32bit applications would run far slower than on a current 32bit CPU (something AMD’s x86 Hammer design wisely avoids by keeping native 32bit execution in its 64bit CPUs), it also had yield problems above 600-700 MHz, and at 800 MHz it ran into basic functionality problems. In light of the Sun UltraSPARC III, which is going to reach 900 MHz within the next few months, the Itanium would never have stood a chance.

A good question is how much of a chance the next-generation 64bit Intel CPU will have. By that time AMD’s Hammer might be out as well, and Sun will be on its way to the UltraSPARC IV. It’s sheer luck for Intel that Sun will supposedly bring out the UltraSPARC V about two years later than originally planned. Had that CPU arrived on its original schedule, it would have debuted at the same time as the first – in comparison pitiful – 64bit CPU from Intel.

It’s not the first time Intel has failed to deliver a product in the server market. The Xeon was supposed to be the most powerful 32bit CPU line. In the summer of 1998, just before its release, Intel’s press releases were full of hype about the Xeon being superior in every way to the Pentium Pro, because it could even do 8-way multiprocessing. The reality was once again a different matter: the first Xeons capable of 4-way multiprocessing appeared nearly half a year after the original launch – the Xeons shipped in the first months were only dual-SMP capable. As for the 8-way capable Xeons, they came more than a year too late. The first shipping servers with eight 550 MHz Xeons arrived in early 2000, when the Intel Pentium III and the AMD Athlon had already hit the 800 MHz line.

And now, at the beginning of the fourth quarter, 8-way (or, for that matter, even 4-way) Xeons still run only on the AGP-less 450NX chipset with a 100 MHz FSB, at a maximum clock frequency of 700 MHz. Yes, there are PIII Xeons running on a 133 MHz FSB (with RDRAM) on AGP-enabled chipsets, with clock speeds of up to 1 GHz – but only in 2-way SMP. Definitely not what a server with a heavy workload needs.

 

Pentium 4 – savior or the last failure?

Intel’s last hope of re-conquering the CPU market lies in its brand new Pentium 4 with its IA32 architecture, the first new 32bit CPU design since the Pentium Pro 150 MHz back in the mists of time. But some aspects of the P4’s design might pre-program it for failure. Most prominent of all is its very deep pipeline, which makes it easier to increase clock speeds – but which also causes a large performance hit in clock-for-clock comparisons with less deeply pipelined CPUs. This design might have been conceived at a time when Intel was the undisputed processor king and, in the absence of competition, the only measure buyers could go by was the clock frequency of the CPU. That’s why Intel’s original plan was to stop the Pentium III at 1 GHz and start the Pentium 4 at 1.5 GHz: to cover up the fact that, clock for clock, the Pentium III is easily 20-30% faster than its successor.

But now, with the AMD Athlon scaling nearly as easily and already beating the Pentium III at similar clock speeds, this deception will backfire on Intel. Instead of being amazed at how quickly the 1.5, 1.6 and 1.7 GHz P4s are introduced one after the other, people will read the reviews on the hardware sites and see that the 1.5, 1.6 and 1.7 GHz Athlons – released most probably not much later than their P4 counterparts – perform much better and cost a lot less.

Intel’s last hope for the Pentium 4 is its quad-pumped (4×100 MHz) Front Side Bus, offering theoretically twice the bandwidth of current Athlon systems (with their 200 MHz FSB). Intel wants to feed it with RDRAM memory, but with the current PC800 RDRAM design, dual-channel RDRAM is needed to match the 400 MHz FSB. That means not only insanely expensive memory, but also chipset and mainboard costs significantly higher than those of a current SDRAM-based (or, later on, DDR SDRAM-based) Athlon mainboard. If they use single-channel RDRAM instead, the memory bandwidth of Pentium 4 systems will be lower than that of the upcoming Athlon platforms with PC266 DDR SDRAM, with increased latencies on top. So Intel either uses dual-channel RDRAM and ends up with system costs so high that the Pentium 4 has no chance in the consumer market and is forced into the workstation segment only, or it uses single-channel RDRAM and gets not only somewhat higher system costs than a DDR-equipped Athlon system, but also lower performance across the board.
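As a rough illustration of the bandwidth arithmetic behind this argument, here is a minimal sketch (Python), assuming the commonly quoted widths – a 64bit (8-byte) FSB and SDRAM/DDR memory bus, and 16bit (2-byte) PC800 RDRAM channels:

    # Theoretical peak bandwidths in MB/s (1 MB = 10^6 bytes here, for readability).
    def peak_bandwidth(megatransfers_per_s, bytes_per_transfer):
        return megatransfers_per_s * bytes_per_transfer

    print("P4 FSB (4 x 100 MHz, 64bit):     ", peak_bandwidth(400, 8))      # 3200
    print("Athlon FSB (2 x 100 MHz, 64bit): ", peak_bandwidth(200, 8))      # 1600
    print("PC800 RDRAM, single channel:     ", peak_bandwidth(800, 2))      # 1600
    print("PC800 RDRAM, dual channel:       ", peak_bandwidth(800, 2) * 2)  # 3200
    print("PC133 SDRAM:                     ", peak_bandwidth(133, 8))      # 1064
    print("PC266 DDR SDRAM:                 ", peak_bandwidth(266, 8))      # 2128

Single-channel PC800 thus lands well below PC266 DDR, while only the expensive dual-channel setup matches the 3.2 GB/s the quad-pumped FSB can move.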

And the most inexplicable design failure of the Pentium 4 isn’t even chipset- or memory-related. This brand new architecture has a considerably weaker FPU than its predecessors, the Pentium III and Celeron. This “glitch” might be the ultimate downfall of this CPU at a time when increased clock frequencies are needed mostly for complex calculations like 3D modeling and 3D games. Nobody has really needed more integer performance for years. That was one of the reasons the Super 7 processors could never really break Intel’s hegemony: although an AMD K6-2 400 MHz offered more than double the integer performance of an Intel Pentium 200 MHz, even at twice the clock frequency it still had lower FPU performance than the older Intel competitor. That meant a significant performance loss in number crunching such as 3D games, and no noticeable performance gain in standard apps, where chipset and memory have a much larger impact on performance than processor speed.

 

Chipsets – from the top to the bottom in just over a year

The Pentium 4 already has plenty of trouble of its own as a processor, but now it has been further delayed by serious failures in the only available P4 chipset, the i850 (codename Tehama). It is but one in a whole series of chipset fiascos over the last 12-18 months.

Until early 1999, Intel had for many years been known as the manufacturer of the fastest and most stable chipsets on the PC market. In the classic Socket 7 era, even people who opted for an AMD K5/K6 CPU mostly chose an Intel chipset-based mainboard to go with it. No serious hobby system builder wanted to play around with VIA’s or ALi’s notoriously low-quality and incompatible chipsets; those mostly ended up in lowest-quality discount PCs.

This situation has been changing rapidly since about mid-1999. Although VIA chipsets are still a far cry from being as stable as an Intel LX or BX, AMD succeeded in releasing a chipset last fall that could go toe to toe with the BX in stability. And Intel itself had the biggest hand in this shift. It started with the questionable move of bullying mainboard manufacturers into buying an obsolete would-be 3D accelerator (the i740) in a bundle if they wanted to receive enough chipsets; Intel later gave up on that tactic and instead integrated the same (by then even more obsolete) graphics core into a new chipset (the i810), eliminating the possibility of an external AGP slot – and thus began the trip to the bottom for Intel chipsets.

A very embarrassing fiasco was the i820/Camino, where the first generation of mainboards had to be scrapped and the second generation recalled, both because of basic stability problems. Now it continues with the i850, which should have been in mass production by now but has been stopped because of severe performance problems with 3D graphics.

 

The last rat remaining on the sinking ship: Rambus

Most of the problems with the i820 chipset, as well as with the future market acceptance of the Pentium 4, originate from Intel’s alliance with Rambus. They wanted to rule the DRAM market, but lost their bid for supremacy. As the market acceptance of RDRAM sinks every month, Rambus has shifted its activities from any kind of productive work to suing everyone in sight, trying to spread fear among the DRAM and chipset makers. It’s amazing that such childish behaviour is accepted at all in modern business life (on second thought, considering what Microsoft did over the last decade to achieve what it achieved, such tolerance of Rambus’ practices isn’t that surprising any more).

 

Afraid of the dark

With winter approaching, it’s getting dark and cold, even in sunny California. Especially dark and cold for Intel, which has been enjoying a lazy sunbath for far too long.

No wonder that at the last Intel Developer Forum the main emphasis was not on processors but on networking products – for the first time ever. Are we about to lose a major bully from the CPU boxing ring?

Black target

Monopolies hunting down the “modern Robin Hoods”

I encountered this article on The Register back in February 2000. To summarize it: Microsoft has been demanding the imprisonment of Mohamed Suleiman, managing director of a Kenyan PC OEM called Microskills, because his company had been loading pirated copies of MS products onto the PCs it shipped. Besides the imprisonment, MS of course also demanded money – in this case some half a million USD.

In Kenya, 9 out of 10 software packages are pirated (a BSA estimate). Why? Not only do people have roughly a ten times lower income than in Western countries, the software itself costs more in absolute terms than in the US. If someone demanded $700 from you for MS Office 97 Standard, you’d probably hold off on the purchase. Now apply that factor of 10 to see what this price really means for the average Kenyan, and ask yourself whether you’d ever pay US$ 7’000 for any office package – especially if the average computer (applying the same factor of 10) would lighten your purse by some US$ 20’000.

In my opinion, it is not the Kenyans who are the unlawful pirates; the company pushing such prices on them is the real bandit. While the average Kenyan software pirate might be breaking the precious Western laws (made by Western people for Western people), the Western company robbing already poor people blind is breaking the greater laws of ethics.

 

Robbing the robbed

To me, this ruthless business practice by an already rich company against an essentially poor country (made poor in the first place by that same Western society) is simply unacceptable.

Visualize the following: a fat man sits at a table, stuffing himself like an animal from half a dozen large bowls simultaneously, shoveling food into his mouth with both hands like an excavator. Suddenly a small piece of food falls out of one of the bowls onto the floor. A skeleton-like man who has felt Famine’s hand for far too long happens to come by, picks up the food and eats it. The fat man jumps up from the table, kicks the starving man’s head several times and shouts for the police. You pass the scene at this moment – what would you do? Especially considering that plenty of well-fed men had dropped by earlier and taken large pieces of food straight from the bowls on the table?

Because software piracy is something that occurs more often in Western society, if you look at absolute numbers instead of misleading percentages. Even if 90% of all users in Kenya have a pirated copy, their total number is but a small fraction of the number of users owning a pirated copy in any single Western European country or, for that matter, in the US. And piracy in the Western countries isn’t just done by end users – big OEMs have been doing it for years as well.

 

What about the real culprits?

I bought a PC back in 1994 from one of the largest Western European manufacturers, and it came with three Microsoft software packages. Although there were certificates, no manuals or media were included. I first contacted the reseller, but they couldn’t help me at all, as they got the PCs from the manufacturer with the programs preinstalled. I then contacted the local MS office. Do you know what Microsoft Switzerland told me? That they can’t help me and it doesn’t concern them at all! The same reckless company that hunts down “pirates” in the already stripped-to-the-bone third world doesn’t care about the same piracy when it’s done in one of the wealthiest countries on this planet!

I can’t help but fully sympathize with the decision of the Nairobi Commercial Court, whose extraordinary dismissal put MS on the losing side this time. Suleiman called the ruling “a victory for morality against the unethical mercenary tendencies of a greedy multinational that thrives on bullying tactics.”

Just this once, the story had a happy ending. But it was sadly the exception, not the rule.

Saving the honor of the G400/G450

Author’s note: my actual first-hand experience is based on a Matrox G400 Max and a GeForce 2 GTS. Although these are not the G450 and the GeForce 2 MX, according to the specifications the scales tip even more in favor of the G450 against the MX than in the case of the G400 against the GF2 GTS. That means that, if anything, the situation is even more extreme than sketched here.

You can see the Matrox G450 specs here and the nVidia GeForce 2 MX specs here.

 

Matrox dead?

Back in July, when I first saw the technical specifications of the new GeForce 2 MX chip on nVidia’s website, I said to myself: “Matrox is dead. The TwinView of the GF2 MX can do something the G400’s DualHead can’t, namely send digital output to two LCD monitors at the same time. And the nVidia chip’s 3D is way more powerful as well.”

After this first reaction, sobering up came quickly. I guess there’s simply no limit to how many times someone can fall for hype – even a seasoned unbeliever like me. The amount of false information surrounding the MX chip was amazing. Within two months it became clear to me that, hype aside, the MX is no match for the G400/450 if you’re after anything more than raw fps in Q3A.

 

TwinView better than DualHead? Not exactly.

Let’s begin with the fact that the card shipped for many weeks with drivers in which TwinView didn’t function at all. It reminds me of the S3/Diamond Viper II, which was also hyped as a 3D card with an integrated GPU (T&L), yet for several months from release all drivers had the T&L disabled… it sounds to me like buying the 2 MB version of the Xeon CPU for an insane price tag and having the L2 cache disabled.

Even with the new drivers in which TwinView is enabled, it’s still a long way from DualHead. While the DualHead of the G450 gives you a full-featured secondary display, the TwinView of the GeForce 2 MX limits the secondary monitor output to 85 Hz, thus a) crippling the capabilities of high-end monitors, b) severely limiting the resolution choices on older/cheaper monitors and c) making the whole hype about dual LCD monitors questionable, as maybe 1 in 10 LCD monitors is capable of handling 85 Hz.

As if that weren’t enough: when you use the TV out on the secondary connector, your primary VGA output is limited to a mere 800×600 – whereas with the Matrox G400/450’s DVD Max you can use up to 2048×1536 on the primary monitor and watch the DVD movie fullscreen on the secondary output (be it a monitor or a TV).

Another thing that might sound unimportant at first: the TwinView of the MX only works with the newest Detonator 3 drivers (see further below for what that really means).

Update, November 2000: for a couple of weeks now there have been GeForce drivers out that solve most of the problems you can read about on this page. But that doesn’t change the fact that these problems existed for nearly two months, which is a pretty long time in the world of 3D graphics.

 

3D performance crown? The Detonator 3 fiasco.

According to the marketing hype, the GeForce 2 MX should sweep the floor with all low-end and mid-range 3D competition, including the whole Matrox product range. Real-world experience shows something else, however.

The Detonator 3 drivers came out around mid-August and promised a 50% performance increase. While in games like Q3A and Unreal Tournament there was indeed a 25-30% (certainly not 50%) increase, a lot of other games suffered far larger performance decreases. Games like Outlaws (which already ran fine on a 3Dfx Voodoo1) started to get choppy, Unreal lost about 15% of its speed, and Requiem: Avenging Angel became unplayably choppy at 1024×768 without FSAA, whereas with the previous drivers it was perfectly fluid at 1024×768 with FSAA (which requires more than twice the 3D power). Not to mention severe visual errors in Q3A itself (see the next chapter).

I can’t help but ask: what kind of drivers did they give us? I checked today, and the Detonator 3 v6.18 drivers from mid-August are still the latest ones. That means these drivers, which are incapable of running a lot of games out there, are the only ones you can use for your GeForce 2 MX unless you’re ready to ditch all the TwinView capabilities… here’s your 2 MB Xeon with the disabled L2 cache.

And in case you didn’t realize it, that means a big chunk of today’s 3D games currently runs better on a G400/450 than on a member of the GeForce family (provided you use the latest drivers). I can’t believe nVidia seems to be getting away with it.

 

Reviewers seem to be playing the same two games all the time

But they do. I haven’t seen a 3D card review that put more than 2-3 games to the test, and that’s misleading. While the Detonator 3 drivers run Q3A just fine (apart from those visual defects) and are superb for playing Unreal Tournament as well, Drakan gets choppy already at low resolutions (whereas a G400 handles it up to 1280×960 just fine), and with FSAA enabled I can’t save in Half-Life without the game crashing. Add the games mentioned in the previous chapter and you have a large percentage of the best-selling games of the last couple of years – and it seems pretty obvious the list would be longer if someone took the time to methodically stress-test dozens of games for basic functionality.

This is how the visual defects with the new nVidia drivers look in Q3A. Those artifacts are moving and flickering as well.

 

2D image quality: no surprises here

I’ve been using Matrox cards exclusively in my PCs for the last four years, so I didn’t realize until a short time ago, when I bought my Elsa Gladiac (GeForce 2 GTS), how strongly the 2D image quality of different cards can vary. For the record, I used the same 21″ SONY F500R monitor with the highest-quality cables for all tests, the same resolutions and color depths for comparison, and refresh rates between 100 and 160 Hz.

When I first installed the Gladiac, my very first impression was how impossibly blurry the image was. Indeed, it didn’t reach the sharpness of my three-year-old Matrox Mystique 220 – which isn’t even connected directly to the monitor, but goes through the loop-through cable of an Orchid Righteous (3Dfx Voodoo). And the G400 (even the G200) is a long way above that quality level. It might not matter when playing Q3A, but it certainly does when surfing the web or typing a letter (or doing something even more productive like DTP or imaging). If I had to limit myself to one PC with one graphics card, I’d choose the poorer 3D speed, because the alternative is damaging my eyesight.

 

3D image quality: now there’s a surprise

And that’s not all. Most reviewers seem to assume that, say, 640x480x32 looks just the same on any and all 3D cards, and that the card delivering the highest frame rates at that setting is therefore necessarily the best. But I happened to play Q3A for weeks on a Matrox G400 before acquiring a GeForce 2 GTS to get higher frame rates, and the very first time I started the game with the GeForce 2 GTS, I was stunned by how ugly it suddenly looked – with the same settings as before.

The main reason for this is nVidia’s faulty S3TC implementation in the GeForce drivers, which makes the game a lot less fun to play. As you can see here, using S3TC doesn’t necessarily lower quality to a remarkable degree (it looks quite good on an S3 Savage card). But nVidia took a “shortcut” (most probably to increase the frame rate and thus be able to claim the fastest 3D card out there). Indeed, the 3D image quality of a GeForce 2 with all quality settings in Q3A set to maximum and S3TC enabled is worse than that of a G400 with 16bit rendering, 16bit textures and bilinear filtering (instead of trilinear). Click on these four thumbnails to see what I mean.

640x480x32 highest quality settings, GeForce 2 with S3TC enabled
640x480x16 bilinear filtering and 16bit textures, G400
1024x768x32 highest quality settings, GeForce 2 with S3TC enabled
1024x768x16 bilinear filtering and 16bit textures, G400

Performance comparison – this time trust your eyes instead of the specs on paper

As most (maybe all) benchmarkers leave S3TC enabled when benchmarking GeForce-family cards – indeed, some have openly stated they find it worth the lower image quality (S3TC gives a roughly 20% performance boost already at 1024×768) – I decided to take the same liberty and compare the G400 at a similarly reduced image quality: hence the 16bit rendering and texture quality, as well as the bilinear filtering (see the screenshots in the previous paragraph).

As I had no GeForce 2 MX at hand, I benchmarked a GeForce 2 GTS and then reduced the fps results by the same percentage by which the MX trailed a GeForce DDR in an MX review posted on Tom’s Hardware Guide. As the GeForce 2 GTS is somewhat faster than the GeForce DDR, and the G400 is faster than the G450 by a similar percentage (because of the different memory interface), I’m convinced these results essentially hold for any G450/MX comparison.
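For transparency, here is the scaling described above as a minimal sketch (Python); all numbers are made up for illustration, and the MX-to-GeForce-DDR ratio stands in for whatever the THG review measured:

    # Sketch of the estimation method, with illustrative (not measured) numbers.
    gts_fps   = 80.0    # hypothetical GeForce 2 GTS result in some benchmark
    g400_fps  = 70.0    # hypothetical G400 result in the same benchmark
    mx_vs_ddr = 0.85    # hypothetical ratio: MX fps divided by GeForce DDR fps (from the review)

    estimated_mx_fps = gts_fps * mx_vs_ddr   # scale the GTS result down by the MX/DDR ratio
    # Because the GTS leads the GeForce DDR by roughly the same margin as the G400 leads
    # the G450, comparing g400_fps with estimated_mx_fps approximates a G450 vs. MX match-up.
    print(round(estimated_mx_fps, 1), "vs", g400_fps)   # 68.0 vs 70.0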

3D performance comparison chart

As you can see, comparing different cards at the same 3D image quality level, the GeForce 2 MX doesn’t look that cool any more. At 69:68 and 44:46 fps, I’d call it even – and even for this parity the MX needed specialized drivers that won’t work with a lot of games out there. Go back to the Detonator 2 drivers so you can play every game on the market, and you lose the TwinView capabilities along with some 20% of the fps, putting the GeForce 2 MX behind the G450 in 3D speed – provided you benchmark the cards head-to-head with equal graphics quality, not equal paper specs, as the basis of comparison.

 

Conclusion: nothing is black & white

These results don’t mean, however, that the G400/G450 is suddenly a good 3D gaming solution. I just wanted to show that a cheap card from nVidia isn’t necessarily better in every way just because it carries the word “GeForce” in its name. While the GeForce 2 GTS is definitely one of the very fastest 3D cards on the market right now, the GeForce 2 MX is a severely crippled version, limited by its far lower memory bandwidth – just like the G400/G450. I have the feeling that if Matrox equipped a model of the G400 series with 128bit 150 or 166 MHz DDR SDRAM (as in the GeForce DDR and the GeForce 2 GTS), it would score, if not right up there with the DDR-equipped GeForce cards, at least high enough to beat a GeForce SDR or a GeForce 2 MX any time.

Of course, this is only speculation. What is fact, however, is that the G400/G450 can hold its own against its only dual-output competition, the GeForce 2 MX. Not even the MX’s 3D speed is convincing enough, and its image quality, partial game incompatibility (or the loss of dual output altogether) and half-baked TwinView features make it no real choice for anything other than pure 3D gaming. And for that, I’d rather opt for a TNT2 Pro for less money and more balanced fps across games, or for a GeForce 2 GTS (Ultra) with its insane 3D power.

No more math in the IT business?

Has anyone ever offered you a computer with 192 MB of RAM? And with 196? Yes, I’m talking about the same computer. Back in late 1998 I got a laughing fit when I saw a guy in a tech support newsgroup stating his computer’s specs, among them the main memory size: 196 MB SDRAM. Considering that the smallest SDRAM module ever produced is 8 MB, this sounds like a real wonder – but in reality it’s just a lack of knowledge. He forgot (or didn’t know) the difference between 1000 and 1024, the first used in everyday life, the second being the standard in the hardware world.

 

Why that strange 1024?

Computers (like all digital electronics) are based on the binary system. A binary system knows only 0 and 1, corresponding to electrical current flowing or not flowing. That’s all the information the smallest unit, called a “bit”, can hold. Several different bit groupings were used in the early years, but the one that has survived to this day is the 8-bit “byte”. While a bit can hold only 2 different values, a byte offers 256 distinct possibilities.

(As a side note, 7-bit values, with a maximum of 128 different possibilities, survive in one IT-related branch: MIDI programming.)

OK, so what does this have to do with your kilobytes and megabytes? The computer “thinks” in the binary system; human beings, on the other hand, find that system rather difficult to use. A “near-overlap” of the two systems occurs at around 1000: in the decimal system we use every day, 10 raised to the power of 3 is 1000, a number we’re all familiar with, while in the binary system, 2 raised to the power of 10 is 1024, which is fairly close to the easily understandable 1000. To make clear why they chose this particular power of 2, the scientists even called it “kilo” (after the Greek word for thousand).

To come back to my first example: that guy had 192 MB of RAM, which is 192×1024 = 196’608 KB. That’s the number he could see when switching on his computer. Without the knowledge taught in high school (in some countries, elementary school) and explained in detail in the previous paragraph, he could only figure out that he had 196-and-something thousand KB, which should be 196-and-something MB. The “something” didn’t make sense to him, so he decided he simply had 196 MB. Crazy, isn’t it? It happens every day – and not only to computer illiterates…
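For the number-crunchers, the arithmetic as a minimal Python sketch (purely illustrative):

    ram_mb = 192
    ram_kb = ram_mb * 1024    # what the machine reports at boot: binary kilobytes
    print(ram_kb)             # 196608 KB
    print(ram_kb / 1000)      # 196.608 -> read with decimal eyes: "196 MB"
    print(ram_kb / 1024)      # 192.0   -> the real, binary megabytes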

 

Nothing new

My first personal encounter with this trend was with 100 MB ZIP disks (though I assume it started much earlier). I was in for a big surprise about three years ago when I tried to push 98 MB onto a 100 MB ZIP disk and, after a quarter of an hour of bit-shoveling, Windows told me there wasn’t enough space on the target drive. Guess why? Iomega was one of the first companies to “introduce the new counting method” of 1000 KB to the MB.

Hard drive manufacturers quickly followed suit, introducing the new conversion rate of 1000 MB to the GB. You could “enjoy” the “advantages” when buying a shiny new 10.1 GB hard drive and getting up to half a gigabyte less for real.
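A quick sketch of what those redefined units cost you, taking the conversions above at face value (note that some manufacturers went fully decimal, which makes the gap even bigger):

    KB = 1024
    MB = 1024 * KB
    GB = 1024 * MB

    zip_bytes = 100 * 1000 * KB    # a "100 MB" ZIP disk, with 1 "MB" = 1000 real KB
    print(zip_bytes / MB)          # ~97.7 real MB -- no room for a 98 MB file

    hdd_bytes = 10.1 * 1000 * MB   # a "10.1 GB" drive, with 1 "GB" = 1000 real MB
    print(hdd_bytes / GB)          # ~9.86 real GB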

 

Don’t expect too many brains from the “suits”

In some branches of the computer industry, the binary system still holds its own: we speak of 64 and 128 bit memory buses instead of 50, 100 and 150 bit, and you can only buy 32, 64, 128 or 256 MB memory modules, not 50, 150 or 250 MB ones. Furthermore, 128 MB means 131’072 KB, not 128’000 KB.

But as the computer business became more and more a money affair instead of a scientific one, IT has been overrun with managers who know how to dress in the latest fashion and how to talk smoothly, but couldn’t add 2 and 2 without a calculator – and it seems there are some things they can’t figure out even with the help of the latest microelectronics…

A quick example: maybe you’ve seen an offer somewhere for a 56’000 bps modem. Most probably you automatically recognized it as a 56K modem. But with the fresh (or freshened-up) knowledge that a kilobit is not 1000 but 1024 bits, you should wonder how that adds up. The truth is, those modems have a maximum data rate of 57’600 bps, which is exactly 56.25K.
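The arithmetic, as a two-line sketch:

    print(57600 / 1024)   # 56.25   -> hence "56K"
    print(56000 / 1024)   # 54.6875 -> a true 56'000 bps link wouldn't even reach 55K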

 

It’s all in the changing of times

While a decade ago the computer business was more about geniuses developing technologies no one had thought of before, nowadays it’s rather about who sells more of a new product. It no longer really matters whether a product is good (or works at all – how many Windows crashes did you have this week?), only who has the better marketing campaign. Why do you think there are so many “Ultra” products again and again?

The folks pushing products in your face these days usually have less of a clue about computing than the average customer. They didn’t get the job to understand what they’re selling, but to know how to move it in the highest possible numbers, and they feel at home in the shark-infested waters of hardcore business (where, most of the time, there is no place for scientists any more).

No wonder you see daily product offerings like a 128bit sound card (too bad the SoundBlaster PCI 128 got its name from the maximum number of MIDI voices it can produce simultaneously – it’s still a standard 32bit PCI card), a 350 MHz graphics card (don’t try to tell them that what matters on a 3D card are the chip and memory clocks, and that the RAMDAC doesn’t matter much), or a 700 MHz Pentium processor (I’ve met several high-ranking product managers in the computer business for whom there were exactly two CPUs: the Pentium and the Celeron. The fact that, architecturally, the PII stood closer to the Celeron than to the PIII is wasted information on them).

 

Marketing guys aren’t the only ones unable to count

But I’ve seen some similarly horrible calculations from all kinds of tech geeks as well… take the “Pentium II 330 MHz”. Intel announced a 333 MHz part, and while this time their marketing department did its homework, the resellers “knew better” and decided that a CPU with a 66 MHz FSB and a 5x multiplier runs at 66×5 = 330 MHz.

What’s wrong with that? There are no 33 and 66 MHz buses – they are really 33 1/3 (33.33) and 66 2/3 (66.67) MHz. And if you take 66 2/3 × 5, you get 333 1/3 (333.33) MHz, which, rounded by mathematical rules, is exactly the 333 MHz Intel announced. Actually, by the same rules Intel should have called its 166, 266 etc. CPUs 167, 267 etc., but I guess that was somehow already too complicated. Or maybe to them “rounding” meant cutting away everything after the decimal point. Or maybe they thought people would recognize and remember 266 better than 267.
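A minimal sketch of the rounding involved (Python, purely illustrative):

    fsb66  = 200 / 3    # the "66 MHz" bus is really 66 2/3 MHz
    fsb133 = 400 / 3    # the "133 MHz" bus is really 133 1/3 MHz

    print(round(fsb66 * 2.5))   # 167 -- sold as "166"
    print(round(fsb66 * 4))     # 267 -- sold as "266"
    print(round(fsb66 * 5))     # 333 -- not the resellers' 330
    print(round(fsb133 * 5))    # 667 -- the figure Intel finally used for the PIII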

They first got it right with the 667 MHz PIII, although I’m pretty convinced they didn’t do it out of respect for mathematics, but rather because releasing a 666 MHz CPU in a post-Christian Western society was considered too risky…

 

Will it never end?

I wonder when they’ll start selling the 128 MB RAM stick as a 131 MB memory module. Or maybe we’re not yet stupid enough for that? You can do your part for a less idiotic IT world by developing some healthy skepticism towards those mindless ads and their even more mindless creators.

Dissecting the worm

Sneaking in

On Thursday, May 4, 2000, a new worm hit the net. It arrived in an e-mail with the subject “ILOVEYOU”. Within hours, mail servers were paralyzed and whole networks of PCs crashed (including those of many an administrative authority and bank).

The mail itself told you to open the attachment containing a love letter. The attachment’s name was “LOVE-LETTER-FOR-YOU.TXT.vbs”. Once you opened it, the mayhem started: it changed your system settings, then sent itself on to new victims and finally killed off your most beloved image and music files.
