This is one of those questions that may never be answered. There was no worldwide disaster at the turn of the millennium. In fact, there were no reported cases of a Y2K-caused system failure anywhere. So was the entire issue a mere fear-mongering hoax perpetrated by the IT industry to make money? Or did the IT industry do such a good job of identifying the threat early and fixing it that they deserve our collective thanks and appreciation?
Advocates of the former view (that it was a hoax) have had the benefit of the doubt all along, but yesterday's events lend strength to the opposite argument - that we escaped by the skin of our teeth on 1st January 2000 only thanks to alertness and good crisis management.
I just heard from a friend of mine that his company suffered a major outage of its entire phone system yesterday. The time the problem started gives a clue as to what the issue was.
When the phones went dead in their Sydney office, it was 10:59 am, and the date was 29 Feb. If that doesn't signify anything, think about what time it was then in Greenwich (Sydney runs eleven hours ahead of GMT during its daylight-saving summer).
11:59 pm, 28 Feb.
In a leap year.
That's what they apparently found out during the investigation: a firmware bug in the phone system's processors couldn't handle the leap-year date change, i.e. the rollover into 29 February, and the system crashed.
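To make the failure mode concrete, here's a hypothetical sketch in C of the kind of firmware logic that could produce exactly this crash: a date-validation routine whose days-per-month table hard-codes February at 28 days. This is my own illustration of the class of bug, not the actual code from the phone system, which I obviously haven't seen.

```c
#include <assert.h>

/* Hypothetical sketch -- not the real phone-system firmware.
 * The leap-year rule is missing: February is always 28 days.
 * Correct rule: leap if (y % 4 == 0 && y % 100 != 0) || y % 400 == 0 */
static int days_in_month(int month /* 1..12 */)
{
    static const int table[12] =
        { 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 };
    return table[month - 1];
}

static int date_is_valid(int year, int month, int day)
{
    (void)year;  /* a correct version would use this for the leap check */
    return month >= 1 && month <= 12
        && day >= 1 && day <= days_in_month(month);
}

/* Imagined tick handler: the hardware clock hands over 29 Feb,
 * date_is_valid() says "impossible date", and the assert halts
 * the processor -- dead phones at midnight GMT. */
static void on_clock_tick(int year, int month, int day)
{
    assert(date_is_valid(year, month, day));
    /* ... normal call processing would continue here ... */
}

int main(void)
{
    on_clock_tick(2004, 2, 28);  /* fine (any leap year will do) */
    on_clock_tick(2004, 2, 29);  /* assert fires: the "crash" */
    return 0;
}
```

A one-line leap-year check is all that separates this from a correct implementation, which is exactly why bugs like this are so easy to ship and so hard to notice until the one day in four years when they matter.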
OK, so it was a relatively isolated bug restricted to one system (the phone network). Other computer systems reportedly chugged along without missing a beat. They had been designed to handle leap years because leap years are a well-known and frequent event.
Y2K was neither. It started as an obscure issue with no public awareness, and represented (literally) a once-in-a-lifetime event. How many systems would have been designed for it? Even farsighted designers were up against the high cost of computing in the sixties, seventies and even eighties. Saving two bytes on every stored date meant serious money, so there is a real argument that the huge sums spent fixing the Y2K problem were worth it, because the design shortcut had saved even larger sums in 60s, 70s and 80s dollars.
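To see both why the shortcut was taken and why it bit later, here's a minimal illustrative C fragment of my own (not taken from any real system) that stores years as two digits, the way so many 60s-80s systems did to save those bytes:

```c
#include <stdio.h>

int main(void)
{
    /* Two-digit years: the space-saving convention of the era.
     * One byte instead of the two or more a full year needs,
     * multiplied across millions of stored records. */
    unsigned char yy_hired = 65;  /* meaning 1965 */
    unsigned char yy_today = 0;   /* meaning 2000 */

    /* Any arithmetic that silently assumes "19" in front
     * breaks at the century rollover. */
    int years_of_service = yy_today - yy_hired;

    printf("years of service: %d\n", years_of_service);  /* -65, not 35 */
    return 0;
}
```

Multiply that one subtraction by every date field in every payroll, billing and interest calculation written over three decades, and the scale of both the original saving and the eventual fix becomes clearer.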
I have always believed (perhaps because I'm in the IT industry) that Y2K was a real problem, and that it was effectively fixed. Perhaps it was fixed too well. Perhaps there should have been some systems that were neglected and then allowed to fail on 1 Jan 2000, just to prove to the skeptics that there was a serious problem.
After all, if everyone in town takes a flu shot, and no one gets the flu that winter, is the flu shot a hoax, or did it just work as it was designed to?