RIPENING SEASONS

Issue #28, October 1998


Time waits for no one (as if you didn't know)

Well, I had other things to write about, but it seems that I walked into a pre-millennial quagmire with my eyes wide open, trying to confront a looming problem with consequences that no one can be sure of, but which are not likely to be insignificant.

All of you will recall my concern in the last issue, expressed in a tailgate box that put it as "fair warning" of a likely run on the banks next year, as people react to the uncertainties inherent in the so-called Y2K bug -- the computer glitch that may disrupt any number of systems. I saw this whole thing, even so recently, as essentially a personal survival concern.

Not long thereafter, those of you for whom I have an email address received a more elaborated and urgent message from me on my realization that there may be much more social fallout to this thing than I had at first considered. From my fresh perspective it seemed reasonably clear, and I thought I could easily explain my reasoning. But it seems I was wrong. Not about what may or may not happen (for nobody really knows), but about the ease of putting across the need to be concerned for what might happen.

The scope of hazard, it seems, is potentially so immense -- as foreseen, at least, by those already having to deal with it: systems managers, disaster planners, and some of the more alert and vitally vulnerable public agencies -- that many of shallower nerve, and perhaps less responsibility, have taken to backing away from it. They say it can't possibly do the damage predicted, for reasons ranging from faulty premises, to "they just would not let that happen" ("they" being any imagined set of capable hands on the wheel, from computer technicians to the capitalist powers that be), to pinning the label of apocalyptic fanatic on those trying to spread the warning. A postmodern version of 'kill the messenger.'

It results, of course, in a secondary and more immediate problem: how to get through these barriers of denial with sufficient credibility, and in sufficient time to get a level of preparation underway that can cope with what can only be known as an approaching moment of chaotic potential -- nothing more specific, except as to timing, but one that will foreseeably leave no one in this society untouched.

Until the moment arrives, you will have to make your own judgement as to how worrisome that chaotic potential really is. And rest assured, you will make some judgement about it, because the media is going to be pounding you with Y2K-related developments as the moment draws near. [The "moment," by the way, has a bandwidth that might extend as far as a year to either side of January 1, 2000, depending on the system particulars involved. Early bug-fallout has already been documented! But the first few weeks of year 2000 are the most critical span, when things will pile on top of one another.]

The task I take up, with this issue of Ripening Seasons, is to bring you up to date on what is known, thus far, about the problem. But be careful, as we get into it, for there are several different levels of the problem, each deserving of discussion:

1. the technical computer problem;
2. the interlocking systems problem;
3. the resultant and expectable social problem;
4. as already noted, the denial problem.

There is a 5th problem, possibly the most critical and certainly the most awkward. It is technically a subset of problem #1, but better treated as a 5th, not least for its '5th column' resonance: the microchip problem, also referred to as the "embedded bug," for the difficulty of getting at it.

For the sake of appreciating how new this fifth concern really is, I will be interspersing my own text with direct quotes from a fairly recent article by techno-authority James Gleick, the author of a 1987 book with the rather interesting title (in the circumstances): Chaos. These extracts, however, come from a 1995 omnibus volume called Our Times: The Illustrated History of the 20th Century -- a 713-page display-table book, in which Gleick's article, a 3-pager titled "Information Overload: the Electronic Revolution," is one of only ten accorded similar prominence. The purpose of quoting it here, aside from the backgrounding and the ironies it offers, is to point up the fact that as recently as 1995, there was no mention of any concern about the microchip Y2K factor.

"By the mid-1980s a peculiar form of statistic had already become a cliche of the electronic era. "A typical chip cost $100 ten years ago, and today it costs only $5. A typical chip contained 50 transistors ten years ago, and today it contains 50,000." The first 'pocket' calculator weighed more than two pounds and cost $250 when it made its appearance in 1971. A decade later the equivalent device weighted a few ounces and cost $10."

Let's get directly down to the basics: the technical problem. I was a bit surprised that everyone didn't already know about it, but I suppose we each have our own range of concerns in this world . . . and then, too, local exposure varies from one part of the country to another. A very recent survey reveals that 68% of us, nationwide, are not yet aware of the problem. So we'll go into that first . . . what's it all about?

In the early days of data processing, when business and industry were the primary fields of computer development and the millennium was too far in the future to be of any serious data concern, the convention for handling calculations involving calendar time was to use just two digits for the year, just as we often do in writing dates. It conserved data space and simplified the programming. Quite obviously, no one at the programming level (the only ones who knew these nitty-gritty details) needed any more problems than they were given. And if some were far-sighted enough to deal with it, they hardly had the voice with which to speak for the entire industry.
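
Just to make the convention concrete, here is a minimal sketch -- in modern Python, purely for illustration, since the actual systems were written in COBOL and its kin -- of the two bytes at stake in every date field:

    # Illustrative only: how a record might carry a date under the old
    # two-digit convention, versus the full four-digit form.
    date_two_digit = "981015"     # YYMMDD -- six characters
    date_four_digit = "19981015"  # YYYYMMDD -- eight characters

    # Two bytes saved per field seems trivial, but multiplied across
    # several date fields in each of millions of records, on the costly
    # storage of that era, it was a real economy.
    print(len(date_four_digit) - len(date_two_digit))  # 2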

Many years of development went by before anyone seriously raised the issue of what might happen, in the older programming, when the century turns a page and the data item for a year becomes 00. This, by the way, is a fine object lesson on the limits of our vision, worth thinking about as we play around in genetic and immuno-protective fields (to name but two), Nature's deepest biological preserve.

"What might happen," in the present instance, is that any processing involving the calculation of time between two dates suddenly goes mushy. A span of time crossing the century line (a value calculated by subtracting the earlier year from the later) becomes a negative number in the two-digit situation, and what takes place after that depends on whether and how the programmer has provided for such 'error' conditions. At best, the system flags it as such and goes right on. At next best, the program hangs up and stops running. In either case, something will be done about it -- hopefully with some success, and in reasonable time.

Anything beyond these two scenarios moves into waters of deepening uncertainty . . . into the range of our #2 problem level: interlocking systems. Computer programs pass their output data on to other programs, often to other whole systems, though not always directly or immediately . . . and not infrequently, to other whole jurisdictions. With each successive passage, the false data becomes that much more problematical: more difficult to trace and correct, and more likely to corrupt other processing, in endless waves.

"The technological march had begun four decades earlier, when three scientists at the Bell Telephone Laboratories invented the transistor. Until then, electronics had meant vacuum tubes -- glass canisters that glowed with a Halloweenish orange light. The vacuum tube bled heat and burned out. At the extreme margin of this device's usefulness stood ENIAC, the first giant computer, containing 18,000 vacuum tubes. As John von Neumann, a father of modern computing theory, remarked, 'Each time it is turned on, it blows two tubes.'"

If you know little or nothing about programming, you may not be aware that there is as much art and style to it as precision and skill. No two programmers program alike, even though they may be working to the same end and with identical data. This is -- or was -- especially true for COBOL, my own favorite language as a programmer (it's now in past tense, because COBOL is no longer used for developmental work -- which makes repair work on those still-operational old programs that much more difficult). It's true of all such work, however: altering old programs is a basically risky and exceptionally time-consuming process. Old programs are like fields strewn with landmines, awaiting the hasty and overconfident traveller. And with Y2000 now as close as July 1, 1999 -- when many fiscal-year 2000 systems begin processing dates across the century line -- haste is the game's name.

The programming problem cannot be fully appreciated without some consideration of how deeply we are enmeshed in a world of delivery systems, and how integral to these systems is computer processing. I think there is, inherently, among us, a kind of convenient attitude of separation from the world of computer processing. Convenient, to the extent that it supports our personal sense of individualism in a world that has become too automated, too digitized, for comfort. We prefer to maintain a 'distance' from all of it, subverted only by the necessary points of contact: employment for many of us; transit and traffic systems; the internet facility . . . we prefer to think that we dip into the world of delivery systems only as necessary, moving freely otherwise, in a self-empowering world of diverse and plentiful choice. But I am afraid the real truth is vastly different.

However great or little our freedom of choice may be, we are embedded, quite dependently, in delivery systems every moment of our days: heat and lighting systems, plumbing systems, garbage and recycling, fuel and refrigeration systems, communication systems, transit and traffic control, building maintenance, marketing systems for food, clothing and every other commodity, media and entertainment, community fire control, police, parks, street repair, public safety and welfare of a hundred sorts, postal delivery, federal regulatory systems, health delivery, education and schools, libraries, banking, licensing, insurance . . . hey, I'll run out of space, here, if I keep this up. I don't mean to undercut your sense of freedom, but for the purpose of this discussion I'm asking you to momentarily recognize how massively each and every one of us relies on the ongoing operation of such a huge assortment of delivery systems that the mind is boggled by a mere review of them.

And now I ask you to recognize the easily ignored change that has come over this endless assortment during the years of our lives. Almost all of you I connect with were born, and likely grew up, in a world where the work of these functions was done by people, not machines. And since people, in the surface view at least, still mediate many of these functions, we tend to be only vaguely aware of the shift . . . but I'm sure you've all, at some time or another, seen the blank confusion on the face of a sales clerk forced to make change for a bill without the benefit of a cash register calculation. Or run into the "can't-get-there-from-here" obstacle of trying to get some mistaken item in a computer database changed. The human veneer on our delivery systems is very thin, and disconcertingly helpless.

"The vacuum tube could amplify a current and switch it on and off as many as 10,000 times a second. The transistor could do that, too -- but the transistor, a quirk in a crystal of silicon, was more or less immortal. Before long, people would be wearing ENIAC equivalents on their wrists. Humanity had learned how to rearrange sand into computers. Transistors meant miniaturization: Technologists began to descend into a domain unimaginable in a world of levers and gears. The decade that followed the 1947 Bell Labs announcement of its invention saw the transistor appear in hearing aids and cheap and reliable radios."

The quiet but momentous change came about in that approximate quarter-century between 1955-60 and 1980-85: our entire commercial and governmental world, along with all public utilities, went through a full transformation, from hand-and-head processing of data to whole-system electronic processing. In 1964, I got my computer training with Alameda County, in the Bay Area; I worked on the development of public health statistical programs, on grading for the school system, and similar isolated developments. Elsewhere in the department, they were developing a Police Information Network which later linked into a statewide system . . . and twenty years down the line, I was stopped by a local officer in the Seattle area, who could instantly inform me that the California vehicle plates on the car I was driving were no longer valid! For better or worse, our lives are now circumscribed by interlocking systems.

More than one person who responded to my email pointed out that the great bulk of the world's people live in lands that are free of our technology overlay -- living, thereby, happier and simpler lives. And that it could be a good thing for us to get back to that level. I feel very much the same; but the point of concern, here, is that -- like it or not -- the sudden fracturing of our multiple systems would be not a sudden blessing but a crippling experience for us. One fraught with a good deal of peril, on account of the massive interlocking of all our systems. It is one thing to knowingly and tentatively check out the happy, productive world of a simpler culture, but quite another to cope with the culture-wide shock we'd necessarily have to confront in a compound systems collapse . . . IF it should happen.

I admit, it seems incredibly absurd that two taken-for-granted digits left out of a processing routine could possibly lead to such havoc. And yet, isn't it always the commonplace thing, the most unlikely element, that congeals our disasters? Hubris is displayed in the things we overlook. One of the reasons I'm inclined to take this seriously is the very absurdity of how it arose. It is almost as if the gods set a trap for us: had we begun the shift to bitstream automation, in our data systems, either twenty years earlier (in the 1930s) or twenty years later (in the 1970s), the centennial bridge would likely have been prominent enough in consciousness for the hazard to have been more readily seen. When Orwell wrote 1984, it was a year too far away to cramp his imagination, or that of his readers.

"It was in 1961, however, that the destiny of the electronic era began to reach fulfillment, when the first microchips -- integrated circuits combining three or four transistors and a half-dozen other components in a tiny, solid, manufactured block -- reached the market. A half-million chips were sold in 1963. By 1970 the number was 300 million. Chips were transforming the American space program, the television, the calculator; soon they would pervade the wristwatch, the oven, the automobile. The most trivial everyday processes were waiting for chips -- masters of timing and control. Carmakers had never quite managed to build a reliable intermittent setting into windshield wipers; electronics suddenly made the problem trivial."

And then, of course, there is the microchip problem. These are simply encapsulated, event-triggered programs, hard-wired to do a specific small job or portion of a job, and absolutely impervious to any change. As just noted in the above quote from Gleick, every sort of electro-mechanical equipment made today, from hand-held cell phones to giant cranes and aircraft -- not to forget the great engines of destruction still sitting in their silos! -- has microchips residing in its circuitry. IF the invalid date process is part of that microchip, then there it remains for the rest of the lifetime of that microchip -- impervious, also, to prior discovery. It is impossible to detect the faulty chip until it reveals itself by what happens to the device that contains it. In tests already applied to factory equipment, some units have just stopped cold when the turnover date was fed into them. In one such case, an entire security system clamped down, allowing no one to either enter or leave the building!
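
To picture how such a lockup could come about, consider this purely hypothetical sketch. The device, the names, and the logic are all invented for illustration -- real firmware varies enormously -- but the trap is the same two-digit subtraction shown earlier:

    # Hypothetical embedded-controller logic, invented for illustration.
    LAST_SERVICE_YY = 97     # two-digit year burned in at the last inspection
    MAX_YEARS_BETWEEN = 2    # unit locks itself down if service is overdue

    def safety_check(current_yy):
        years_since_service = current_yy - LAST_SERVICE_YY
        # A negative span reads as an error, and the 'safe' response is
        # the same as for an overdue inspection: shut everything down.
        if years_since_service < 0 or years_since_service > MAX_YEARS_BETWEEN:
            return "LOCKED"
        return "RUNNING"

    print(safety_check(99))  # RUNNING: 99 - 97 = 2, within limits
    print(safety_check(0))   # LOCKED: 00 - 97 = -97, and no way to patch the chip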

At a Texas Y2K conference this August, it was estimated that somewhere between one percent and ten percent of the 40 billion chips that have been made have the date problem coded into them. That translates to as many as 4 billion timed breakdowns waiting to happen (or as few as 400 million, if optimism is your cue) . . . and let us just hope that it's no worse than stoppages.
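
For anyone who wants to check the arithmetic:

    # The conference estimate, worked out in whole numbers.
    chips_made = 40_000_000_000   # 40 billion chips manufactured
    low = chips_made // 100       # one percent affected
    high = chips_made // 10       # ten percent affected
    print(f"{low:,} suspect chips at 1%, {high:,} at 10%")
    # 400,000,000 suspect chips at 1%, 4,000,000,000 at 10%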

Is this Chicken Little talking, here? Am I in some soft-headed paranoid fantasy-land? Or are those who would think so -- the nay-sayers -- asleep in their own woolly lullaby-land of desperate security? I'm afraid the question has to hang in the air -- over our heads, as it were -- for another year and more, which will either be a time for preparation or a time for increasingly polarized resistance to the prospect, depending on which camp you fall into.

Like it or not -- and whether you agree with this statement or not -- we are doomed, at the very least, to living out what's left of this century in one fantasy-land or the other, for nobody knows (or can know) which stance will ultimately prove the more realistic. And while an informed fantasy may be no better in the end than a totally imaginal one, it is less likely to make a passive victim of you. Preparation, itself, though of limited potential in a field of massive unknowns, does have the vital advantage of increasing one's confidence in the face of those unknowns. You can be a Winner, if only assuredly in your own resolve.

"Chips became a familiar visual icon: Blown up for photographs, they revealed circuits laid out in a rectilinear grid like the street map of a futuristic city seen from miles above... Chips were machines in a new incarnation. Devices imbued with electronics seemed less mechanistic, less predictable, more magical, and more soulful. They embodied knowledge as no machine had before -- for the real medium of the electronic revolution, it was now clear, was information."

All of what I've said so far is only the stage-setting for problem level #3, the social problem that will begin to take over the scene sometime in the course of 1999, probably rather late in the year. If you think Lewinsky has occupied the daily media, wait until you see Y2K, about this time next year, when there are just a few months to go, and challenges to "get real" are flying thick and fast from every quarter. And if you think this isn't going to unnerve the reading/watching/listening public . . . well, you'd better lay in your aspirin while you can, even if (especially if) you make no other prep.

This is the level at which I have the most concern. Things will definitely go wrong as we enter the new century/millennium -- I think it's not really a question of whether, but of which, simply because the odds against a completely clean passage are too astronomical. And it is quite reasonable to assume that some really serious things will go wrong -- things that will not likely get a quick or easy fix. We won't have known exactly what to prepare for, and we are bound to get caught short on things . . . maybe too many at once, or something too critical that we simply hadn't thought about, or maybe things beyond our range of personal resource. This is the setting in which panic can generate. If not in our own quarters, then up the street, or down the hall in another apartment. I live among 38 other tenants, here, and I know most of them.

Mary, a sweet and gentle old southwesterner, gets distraught, even these days, if her children don't pay a regular (like daily) visit. Mr. Byk, the Russian immigrant who lost a leg to diabetes early this year, counts on the care unit that takes him regularly for medical appointments; his English is rudimentary, at best. There are one or two with drinking problems if things get too intense. And tenants on this third floor, and above on the fourth, unable to easily navigate stairs, who would simply be stranded if the single elevator failed its dedicated mission -- a microchip possibility. One doesn't live in this building without coming to realize the vulnerability of others.

I could be just an observer -- my favorite life stance -- if it weren't for the fact that I'm involved, now, with these people. But it's really no different than for those of you who have families, and a sense of responsibility for what happens to them. You'd be fools to take this too lightly, when there is time enough to get ready for it. The bottom line is that things will go wrong . . . and nobody knows what, or how badly. It's the sort of uncertainty that insurance companies have built their business on, but there are no policies to cover this one. We'll more likely see specific disclaimers issued, for existing casualty coverage, as the time narrows. It's really up to us ordinary people to see this one through.

"When a technology gets 50 percent faster or 50 percent smarter, the result is usually just a faster or smarter technology. When it gets ten or a hundred times faster or smarter, the result can be a phenomenon altogether new and unpredictable. No twentieth-century technology illustrated this rule more dramatically than the computer. The technology began in two small niches of industrial civilization: business accounting and scientific calculation. Who else would need to compute? Even within science it was not immediately obvious who would need the ability to carry out thousands, millions, or billions of arithmetical operations each second. Astronomers were first, then artillery designers ...then... weather forecasters?"

I don't think I'm being unreasonable or extreme. We have contingency disaster plans in place all over the country, variously for earthquake, fire, flood or weather extremes, and they are hardly thought of as doomsday forecasting. Right here in Seattle, the city goes through spasms of sudden realization, whenever a minor shaker rocks us around a bit, that "the Big One" could come along anytime. Nobody thinks it a far-fetched recognition.

So why all the denial on this thing? Perhaps because it's an entirely new kind of situation, and one that disconcertingly blurs a very old line of separation that I think we instinctively recognize, between the sacred and the profane. Unavertable disaster has always, up to now, been the realm of the gods, while science has made its mark with the sure, the manageable, the safe harbor of predictability -- and as need be, the pre-emptive response. We have no room for widespread catastrophe in our conceptual framework, except it be from Natural disaster, or in the working of pure chance. Now, we're presented with a strange hybrid: a creation of science that bids to escape our control, and for some brief or lengthy period become a scourge.

The denial, then, is our instinctive reaction. The danger is difficult to allow, for it "can't be happening" in the zone that we have come to rely on as within our control. Naturally, "our control" is not personal -- it hasn't ever been -- so those of us who fall instinctively into this denial pattern find no need to worry about any lack of personal ability to contain or resolve the problem. Without any further consideration, it becomes "they" who will fix it, pick up the pieces, take care of all the loose ends. Which is precisely the horrific potential that lurks in this sort of situation: In a worst possible scenario, the "final solution" for a society that has lost all sense of community is that it goes down while everyone waits for the "they" who never arrive.

I'm sure that "they" will be doing increasingly more, as the months go by, to fix what they can, and prepare for what can't be fixed by the due date. But there is going to be some residue -- and while the word implies smallness, we have no reason to suppose it -- some residue that will be ours to handle. And the sooner we "get" this, the better we're going to be able to handle it. And the more comfortable we're going to be with it.

"By the end of the 1980s, however, computation had exploded outward from these niches to invade every aspect of quotidian life. Electronics did virtually no work, directly; this whole realm of technology was rightly said to be the successor of not the steam engine but the clock. Its strenths were timing, control, and the manipulation and accumulation of information. Those were strengths hardly any machine could afford to be without. Automobiles, washing machines, telephones became electronic. We could hardly see the transformation. So many of the changes were behind the curtain or inside the black box."

That's the end of the Gleick passage, and just about the end of what I have to say on the subject. I did want to mention the interesting thing that turned up for me in the responses of those (a dozen or so) who replied to my email. The women among them seemed about twice as likely as the men to see the problem level I was trying to point out. Not the technical side, but the social. Men, by and large, remained focused on the technical issues.

Now, it could easily be said that there won't be a social problem if we fix the technical one. But I am more inclined to understand the difference in perspective as an index of people's response orientation to the problem. The technical issues are not for us to deal with, but the social issues definitely are. At least two of the women who responded have already embarked, in their communities, on the nitty-gritty process of doing something about it. Which is exactly the way I'm positioning myself on it. My prime concern is the community here in my building; but I also have some influence among a dozen or more other senior residence facilities in this system. In order to make it work for them, however, I'll be putting my persuasive efforts to work on the local Housing Authority . . . and if I am effective at that level, it could well extend to some 22,000 residence units, citywide.

To what end? Again: the basic makings of self-empowerment, in the face of a situation that threatens to render them powerless. If they can be readied to withstand 30 days of being thrown on their own resources, with a heightened sense of community built into the process, I'll feel that I've put the 15 months left to us, for this, to the best possible use.

Listen to Rachel, emailing me, now, from northern California: "I'm finding the thinking that makes most impact is not when I talk about PG&E or federal things, but the police chief saying he's not doing anything extra, or the water company saying they'll have extra men on hand to turn the valves by hand . . . and people suddenly hear that government isn't going to do it for them. Or that the Co-op shelves would be bare in under two days, as would Safeway -- and those are direct quotes from the chief and water supervisor and Co-op mgr, and Safeway's corporate office, and they said it to me."

Rachel is one of about a dozen friends to whom I am now forwarding useful information, as I pick it up from several good web sites. One of these carries a daily list of links to Y2K articles in the worldwide press -- anywhere from 20 to 50 such for each day of the week. I'm trying to glean the material that bears on the social and community aspects, and send it along to those who can use it. Let me know if you'd like to be on the list, as a recipient. Whether or not . . . you had better start thinking about your own role in this developing situation. It is something all of us are going to live through. (Hopefully)

 
