Category Archives: Technology

Escalating Ignorance in the Information Age

Oxymoronic?  Perhaps, but true.  The more ‘information’ that we have produced in the past forty years of networked information systems and the internet, the less we seem to know or trust. We are in an era of information entropy in which more is less.

I remember six years ago when an acquaintance of mine mentioned that she did not have cable t.v.  I wondered how she could possibly keep informed of current events.  Two years later, I dropped it myself, never regretting my ‘loss’.  Subsequently, I have become progressively more selective in my reading, particularly on the web, finding that much of what I have consumed provides less and less insight.

The information age has provided a wealth of data, but not a corresponding wealth of insight. Why is that? Let’s review.

  1.  Reality is changing at warp speed. Yesterday’s facts and truisms are being rapidly rendered obsolete. This ain’t your granddaddy’s nothin’!
  2.  We are producing mountains of data, but proportionately less ‘information’ (remember: data and information are not the same).
  3.  The information that we do produce from the data is often without meaningful context or perspective, and therefore of limited utility, relevance or reliability in a world where context can change as quickly as facts, and perspectives proliferate.
  4.  The institutions and information intermediaries (the press, government, academia, science, professions, unprofessional organizations such as Facebook and Google) that we depend upon for reliable and trustworthy information have almost all been diminished by scandal as they have become ‘monetized’, or otherwise compromised, directly or indirectly, by economic forces that have bent their values to serve other objectives.
  5. Concerted efforts to distort or undermine or repudiate otherwise valid information have been refined and deployed with devastating effectiveness.
  6. We have become conditioned, if not programmed, to suspend, if not avoid, critical thinking in preference to simple or comforting dogmas, also known as ‘thought on auto-pilot’.  We have willingly become prisoners of our own illusions, or those which too many are willing to sell us, in a world where there are now too many factoids to make sense of very much for very long.

One of the interesting consequences of all this is that in many subtle ways we take more time to do things that once seemed so simple, or to make decisions that are now more difficult in an increasingly complex world. I remember standing in the soap aisle of the local supermarket gazing at the various offerings of dishwasher detergent.  There before me was New and Improved, Extra New, Super Improved, and You Won’t Believe Your Eyes, all in similar but different containers by the same manufacturer, all at nearly the same price. Along came a lady who engaged in the same exercise as I.  After a few minutes, we looked at each other and asked ‘What’s the difference?’  I could just grab one off the shelf and be done, but I’ve been programmed to optimize; best value for the price. Ultimately, I just grabbed one off the shelf.  Now multiply this simple example across the plethora of shopping transactions. Recognize that this phenomenon applies to information as much as dishwasher soap. The default for decision gridlock is snap judgment, which often leads to the unintended consequence of buyer’s remorse, and the oft resultant lament: ‘What was/were I/they thinking?’

At a higher level of consequence, business and governmental decisions become similarly captive of a world that is devolving from long assumed perceptions of homogeneity to ever more complex and finite sub-groups, sub-cultures, sub-markets, subdivisions; each with its own peculiarities and potential risks to the unwary, and few of which we really understand.  Middle East peace? Climate change? Healthcare policies? Renewable energy strategies? Transportation strategy? Tax reform? Nuclear energy?

So here we are at the pinnacle of the data-pile, at which our economic elite, blessed with all the raw data and algorithms they possess, are too risk-averse to invest their parked trillions for fear of risks they cannot effectively define, and therefore cannot effectively hedge.

And our ‘intelligence services’ with their army of server farms cannot pro-act with reliability; only react once the threat has manifested itself.  You don’t need big data to set up a sting for the witless.  But all their data isn’t helping them to preempt the wily.

And government, which is more reactive than proactive by nature, works on old and fragmented systems evolved from the vastness of its enterprise and the granularity of its operations as defined by ever more complex regulations; systems which are too big, too complex to upgrade, but too critical to let die.  This also applies to large corporations, which are bureaucratically not too far removed from government.

I do not consider myself an information Luddite.  By virtue of the very nature of my profession, I love good data; I crave good data; I pine for good data; but I also distrust all data until its reliability can be proven.  More is not necessarily better.

Our data and its infrastructure are steadily holding us captive while we perpetuate the delusion that they are setting us free. Unwilling to accept this possibility, we double down on our bet on artificial intelligence (AI) as the means to master the data-pile and set us free. No doubt, AI will bring many advantages.

But it also holds the risk that in seeking to outsource our thinking and judgments to so-called sentient machines, we are inviting a concentration of power (think Amazon, Facebook and Google) and a potential for manipulation that enslaves rather than liberates us.  Given our own individual and collective imperfections as citizens, professionals and societies, is it reasonably plausible that we can create AI that transcends our manifest imperfections and biases, rather than AI that is vastly more capable of the harm we can already do without its assistance? Stated more simply: can imperfect humans create perfect machines, or merely machines more capable of leveraging our imperfections to greater consequence?

We need not look far to preview the risks. Darkness is descending as the Trump administration seizes the reins of power and systematically draws the shades on the windows of government.  Today it seeks to withhold information; to render us less informed. Today, as it has for the past two years, it perpetrates blatant lies, increasingly devoid of any subtlety, to propagate its world view.  Imagine what it might do once it has implanted its partisans where all the levers of information creation or influence are located.  Consider a modification of the adage: ‘To err is human; to really screw up takes a computer’.

The possibilities are exponential.

Happy Presidents’ Day.  Better ones are coming. Hopefully.




But What If I’m Wrong?

“But what if I’m wrong?”

A novel question, don’t you think?

This seems to be a preposterous question to many experts and people of authority. Their degrees and titles are accumulated like armor to shield them from such questioning by others, and our narcissistic society of recent decades does much to program high levels of ‘self-esteem’ and ‘empowerment’ to fill in any gaps in credentials. Can you picture Larry Summers or John Boehner or Larry Ellison or Marissa Mayer asking this?

As an auditor and consultant who has spent much of his career questioning the wisdom of others, I am exposed to the occupational hazard of turning those weapons of critical inquiry on myself. Fortunately, it is rarely suicidal, and it can sometimes have the benefit of alerting one to one’s own foolishness before it is brought to one’s attention by others… generally not gently.

I engage this question often, generally daily, with the subjects of climate change and energy transition. A recent article on the protracted drought in the western US brought the question to the fore. At issue is whether the western US is undergoing a cyclical drought, of the kind that has happened before at various levels of severity for various extended periods, or whether this is a systemic effect of climate change that will manifest not as a cycle but as a trend.  If we wait for a definitive answer, the consequences could be dire for those directly affected.  If we rush to act on either assumption, we risk wasting precious resources or precious time. In the moment, one bears a significant risk of error, with consequences either way.

The specific manifestation may be drought, but the contextual question of cause is by no means unique to this piece of geography or this particular natural phenomenon.  Climate skeptics and climate change adherents can each marshal arcane data to support their position, or alternatively poke holes in the credibility of the other side’s argument.  Often, neither side can prove or disprove its argument, because neither side has sufficient bullet-proof information. Much of what climate change advocates rely on for climate history is inferential evidence drawn from proxies: tree rings, ice cores, soil cores, etc.  And the more direct and current evidence is either insufficient in time span, or insufficient in breadth and depth of accumulation (e.g. ocean temperatures, atmospheric readings at higher altitudes over the entire globe for completeness and uniformity), to lock down an argument. The skeptics’ preferred route is divide and conquer: cherry-pick the data that supports the premise, and narrow in on a particular arcane facet to the exclusion of everything else that’s happening.  A recent article on divergent approaches to storm surge and sea level rise further illustrates the dilemma.

The scientific community, which we are told is 97% supportive of the premise that Climate Change is a) real, and b) subject to human influence, is somewhat schizophrenic as a group on the subject.  On the one hand, some significant chorus of the community is warning us in breathless tones of the impending point of no return in climate system dynamics that will seal our fate.  On the other hand, with each new report on ice sheets, or tornadoes, or ocean acidification, or monsoonal rains, or new high temperatures, or whatever, they demur from making a direct connection to climate change because “we don’t yet have enough data to state the case conclusively”.  That may be valid from a perspective of scientific methodology, but it does not sustain the thesis that we “must do something NOW”, even though the apparent trend of events that we all can observe suggests that we probably should. And it does not sustain the proposition of exactly what we should do now, to what attainable effect.

There are four reasons why I can sustain commitment to the hypothesis of climate change in spite of science’s struggle to bring coherence to seemingly disparate facts or conundrums in modeling:

1)  Something’s going on across a broad range of phenomena, with such consistency and apparent escalation that it cannot be responsibly dismissed as just another day in paradise, even if we can’t explain it definitively now. (Multiple data points)

2)  Even if the science remains somewhat muddled and inconclusive within the straitjacket of its empirical methodology, the anecdotal evidence from that unruly place we call The Real World is sufficiently diverse in nature, and congruent in basic direction, to give comfort that a trend of some kind is developing to which we must pay attention, because the consequences could be such that we cannot afford to ignore them.  (Multiple perspectives)

3)  While the scientific community is by no means immune to herd mentality, the breadth of professional specialties and institutions and vested interests who have come to consensus defies credible belief that the consensus is merely herd mentality orchestrated by some world-dominating cabal. (Checks and balances)

4)  So many of the arguments hurled at consensus science by the outliers and their camp followers are predicated on such apparently flimsy logical and factual constructs that they beg dismissal from serious consideration. (Logical fallacy)

But still one must allow that even the minute minority may be right, if for the wrong reasons. Scientific progress has often been built on renegades’ destruction of the conventional wisdom of the moment. They cannot be summarily dismissed.

Is the sun a factor in global warming? Quite likely, although current scientific methodology assigns it modest influence. Are greenhouse gases the major cause? Quite probably, because we know the chemistry of burning carbon fuels beyond question, and we know the physics of their effect.  Unfortunately, those physics are not the only physics to be considered in understanding climatic evolution.

So, an open mind is essential, and the question “But what if I’m wrong?” is a vital tool of self-assessment and intellectual integrity for all players.  But too few seem to use it.

The Question (BWIIAW) becomes particularly important when one’s responsibility for decision-making impacts the well-being of others; their lives and livelihood, their health, their wealth. People who are clueless about climate change are keenly aware of their personal circumstances, and understandably distrustful of those who pronounce with obvious disregard for personal consequences. The manifestation of arrogance and indifference on both sides of  the climate debate is troubling, and explains in large measure why humankind has not progressed sufficiently on this issue.

Nor is The Question exclusive to climate change.  It might be nice for both sides of the fracking issue to try it out.  And genetic engineering. And nano-technology. And technological displacement. And Big Data Analytics. And economic policy. And foreign policy. And medical efficacy. And data privacy. And right to life. And death with dignity. And interventions of all kinds for all the best of intentions. And the list goes on. In a time when Big Data has yet to vanquish great uncertainties, and when judgments in a nanosecond can yield regrets that ‘keep on giving’, we can all afford a moment to ask ourselves ‘The Question’.

Asking The Question doesn’t necessarily give me answers, but it does inject a minimum daily requirement of humility. And as long as a voice in my head does not whisper ‘Sid, you’re probably wrong, or at least on really thin ice’, I can inch forward for another day, and ask again tomorrow.



The Price of ‘Free’

Hi, my name is Sid. I’m a technological Neanderthal.

I have an iPhone 4, and I’m not likely to trade it in for the latest, not even if Apple’s new Marketing wiz comes out with a new model in Burberry plaid.

I don’t have an iPad because the form factor isn’t worth the price of a good laptop when I’m more about information creation than information consumption.

And now I read that Apple has a new Microsoft Killer strategy to give away software because ‘it’s all about the hardware’. A lot of media Toadies appear eager to peddle this nonsense as the new received wisdom.  After some contemplation and a reasonable period of gestation to allow appropriate fermentation in my primitive brain, my considered response is: ‘You people are on CRACK!’ But that’s just my personal opinion.

Actually, the real news may be that, in the case of iWork, Apple has finally priced a product at what it is worth: nothing. But Apple’s give-aways and their implications for Microsoft are not the real story.  The more interesting one is the battle between hardware and software, and another story, the Price of ‘Free’.

The hardware vs. software story goes back to the late 70s when IBM’s dominance in Big Iron was coming under question. Two things were occurring in tandem.  Mini-computers were arising from the technological primordial slime to challenge the Big Blue Boxes behind the glass walls.  As these less powerful but tactically more relevant platforms made their way into the hands of mere mortals, software packages evolved that were closer to the needs of end users.  IBM, which was known for its less than engaging software (think of it as the Microsoft of its time), was beginning to feel the pain. But it adapted, because its dominance was eroding.  IBM may have been big and clumsy, but it wasn’t dumb. It entered and survived and ultimately dominated the mini-computer field.  And with time and patience and the arrival of the Web, IBM lived to see the return of what it knew and loved most: Big Iron.  But gradually, IBM, being a learning organization, realized that a computer is just a dumb, expensive box. It’s only worth what it can do. And software defines what it can do more than hardware. It’s the software, Stupid!

Its next flash of insight was to sell not just software but service, the gift that keeps on giving. Kind of like a blood transfusion in reverse.  Steadies the revenue stream and keeps the i.v. line open for injection of new services and extraction of new revenues.  This model has now made its way to Microsoft and Adobe, who offer subscription software with automatic updates in place of ‘buy and bye’.

I thought that IBM was crazy to sell its PC business, which it was never fond of from the beginning. But it was actually smart. It foresaw the PC business becoming a commodity business, and didn’t want any part of that.

So when I see Microsoft now entering the personal information appliance hardware business in competition with Apple and Samsung and LG and whomever, I wonder if it hasn’t lost its way.  Google and Amazon are peddling cheap devices to hook you into their services; similar strategies for different reasons. Google wants all your information so they can pimp your profile to the world for profit. Amazon wants to install that reverse i.v. so they can suck every possible sale out of your aspirational little soul.

Apple has similar aspirations to Amazon, but not nearly on as grand a scale.  It’s all about entertaining you.  Making you feel empowered and special. And the hardware does that just fine.  And the software does that o.k. because Apple has trained you to not be too demanding.  That wouldn’t be cool! And because it extracts a hefty price for its very sleek hardware with its modest software that makes you feel good,  now it’s giving you the software free out of the goodness of its corporate heart.

So the notion that Apple’s free iWork et al. is going to be a Microsoft killer is about as dumb as the notion that tablets and smartphones are going to kill desktops and laptops.  There is a place for desktops and laptops where people do work. Tablets are accessory to them in the workplace, and have more value in personal information consumption: PIMs on steroids. The current turnover in sales is more a product of marketing once again convincing us that we need the newest of something we already have because we need the newest for our self-esteem.  (Logic would posit that I still have an iPhone 4 as proof that I have no self-esteem. Whatever!)

*  *  *

Let us conclude with ‘free’.  ‘Free’ is a pricing and marketing gimmick.  Always has been. Always will be.  Everything costs, and somebody has to pay. Somehow. We have been brainwashed to expect free without considering the price. In a supposedly sophisticated society, we are programmed to salivate at the sight of ‘Sale’ in Pavlovian fashion.

The demise of Penney’s campaign to replace false sales with low prices speaks to the degree of our social programming in ways that we do not seem to appreciate or want to confront. And as I write this, we are just one month away from our annual celebration of human debasement known as Black Friday.  It will be interesting to see what kind of firepower manifests at the mall on November 29th, particularly in Florida where ‘stand your ground’ still lives and dies. ‘Black Friday’ may take on another meaning. (What we need are more ‘good guys with guns at the Mall’, the NRA will say).

Everything has a price, a cost and a value. Price is supposed to reflect value, and does not always suggest cost.

When the price is free, what does that say of the value?

When the price is free, who’s really bearing the cost? And what is it?  Ask Google, and Facebook, and Amazon, and Apple and Yahoo. They know.

By the way, what would you pay for an Apple roadster in Burberry?  How ’bout with autopilot software thrown in for ‘free’? It’s all about the hardware.



Apple is the New Microsoft

If you infer that the title of this post is not a compliment to Apple, you are correct.

Full disclosure: I have never been a fan of Apple.  I grudgingly respect the departed Steve Jobs’ excellence as a marketer. I respect his demand for excellence, which has been noticeably slipping since his passing. I did not respect his management style, and I thought his technological chops were grossly over-sold. From the beginning, he had a marketer’s keen sense for the potential of other people’s technology, but he was not a technologist.

For this reason, he provided products with a pretty hardware face, and a clean user interface, but there wasn’t much behind it. Or as Gertrude Stein so famously said of Oakland, “there’s no ‘there’ there”.  Which is why Microsoft has done, and still does, the majority of the ‘world’s work’, for better or worse.  Apple made computing approachable for those who really didn’t want to get ‘under the hood’ and tinker with the petty annoyances of Microsoft’s less pristine platform.  And it thereby allowed its happy followers to feel more empowered than the Win-tel Neanderthals.

Thus, Apple has come to dominate the personal information appliance market, but not the business information technology market, and there is an obvious reason for that.  The personal market is more hardware driven than software driven.  The software is less complex in scope, and potentially (not necessarily) more stable. Apple’s particular form of design fascism works to its advantage in forcing would-be application designers to conform to its dictates in order to play on its platform. But it has also failed to seduce other app designers, particularly of business applications, to come to its information Garden of Eden. One might wonder why.

*  *  *

I acquired my first Apple product two years ago when my beloved Treo regrettably was shredded by my lawn mower. (Don’t ask how that happened. It’s still a cause of great humor in my family!)  Palm, Inc., as some of you may remember, suffered the same fate as Lotus.  As Lotus was acquired by IBM, so too was Palm acquired by Hewlett Packard. As Lotus’ desktop suite, which was superior to Microsoft’s, was abandoned by its acquirer, so too was Palm’s technical essence allowed to wither by its befuddled acquirer. If Palm had lived, I might still be Apple-nostic.

I remain a reluctant captive of the iPhone for the time being, but recent experiences in my interface with the Apple universe suggest that it has at last joined the fraternity of muscle-bound, brain-impaired behemoths, alongside Microsoft.

First, I made the mistake of updating iTunes on my desktop to the latest version: whatever.  That installation to my aging desktop seemed interminable, but appeared to go without incident… until I tried to sync my phone to update my calendar and contacts. The new iTunes platform would not recognize the phone.  Incidentally, I had resisted updating my iPhone 4 to the new iOS7, pending debugging of that latest gift to humankind.

I next went to Help to trouble-shoot the cause, and then to the Apple Community for self-help because I was disinclined to pay $19.95 for support that I should not have needed for a problem originating with a download of their update.  The Apple Community confirmed a variety of connectivity problems either with iTunes update, or with iOS7, or between the two.  O.K.

I chose not to deal with the iTunes dilemma on a Sunday afternoon, as my music still worked, and my phone still worked, and I wasn’t in the mood to de-install and re-install iTunes.  But since iTunes was working on its own, I decided to explore the Radio feature which some in the media have speculated could be a Pandora-killer.  Pandora, breathe easy.

I created four stations. In three of the four, the first song played was by the intended artist. The ones that followed were by other artists that I would not remotely associate with the same style of music.  iRadio, in Apple’s infinite wisdom, was apparently trying to introduce me to music I didn’t previously know I wanted, kind of like Steve’s other products.  Except I really didn’t want it.  At first I thought Apple might have reassigned the engineer who crafted its Maps app to the Radio project.  The Radio algorithm seems as accurate as Apple Maps.

Today was the icing on the cake. My wife’s two-week-old iPhone 5 would not handle voice calls.  Texting and data access were o.k.  I couldn’t reach her with my iPhone 4. My daughter couldn’t reach her with her iPhone 4S.  But my daughter and I could talk to each other just fine.  My daughter checked the web and verified that there were a variety of problems with the 5, of a possible hardware or software nature, in conjunction with iOS7.  So I took my wife’s phone back to the Mega-Mall for service.

The service rep at the cell carrier noted that my wife’s phone was on iOS6.something, as are mine and my daughter’s.  She indicated that I would need to upgrade it to iOS7, docking it to iTunes… on my computer. You can imagine that I did not greet that prospect with warmth, and I explained my history with iTunes.  She said that, nonetheless, Apple is forcing users to upgrade all devices to the latest platform for continuity of functionality.

Her explanation had a scintilla of credibility, except for the fact that my daughter’s and my phones were still functioning o.k. in Apple’s nazi universe.  But what do I know?!  She further indicated that if there was a hardware problem, it would not be fixed by the carrier’s service support, but by Apple’s.

I returned home with the prospect of dealing with iTunes issues and iPhone 5 issues and iOS7 issues, and paying $19.95 per issue to deal with Apple Tech Support Chat, probably only to learn that it’s all the carrier’s fault because Apple doesn’t make mistakes. (It just fixes its non-mistakes quietly, rarely admitting or denying culpability.  It’s well prepared to deal with the SEC.)

So I decided to go ‘spheroids to the sunset’ and just download the new iOS directly to the phone, contrary to the instructions of the carrier service rep.  IT’S A MIRACLE!!!!!  Voice calls now work again.

So why did the phone cease to function after two weeks, and now functions with a software upgrade?  And why does my 4 and my daughter’s 4S continue to work without the upgrade?  I suspect only Steve knows, and he’s not returning calls.

But I do believe this. Apple and Microsoft and other technology companies in their league care little about what users think of the user interface, or of reliability, or of sustainability of investment or service.  We’re just here to be dragged along in their business model, and milked on a regular basis with upgrades we don’t need, and new peripherals demanded by planned obsolescence, and 600,000 apps, of which 590,000 are trivial to worthless, and the rest so fragmented that in total they barely create the illusion of productivity.

Although Apple is the butt of this particular tirade, it is by no means alone. The consumer and business technology ecosystems are beginning to take on characteristics of the auto industry in the US before its collapse.  This might be a good time for some introspection, following some extrospection by an industry that is becoming increasingly insular.

My wife was not enthusiastic about getting a smart phone. She often said she just wanted a phone that works reliably. Still does.



Appropriate Technology

During last month’s International Festival of Arts and Ideas in New Haven, CT, a participant in a panel discussion on innovation shared an interesting anecdote.  He explained how his company was pondering how to provide a lower cost incubator warming system for premature babies in lesser developed countries.  The high-tech model sold in the US was approximately $23,000 per unit; too expensive and too complex for less developed countries.  After collaborating with an innovations center in its India affiliate, it hit upon an ingenious solution: stripping down the highly automated US unit to a low-tech but just as effective manually controlled unit at a fraction of the price.

That got me to wondering.  If the low tech, low-cost unit works for Indian kids, why isn’t it good enough for US kids?  How much does our obsessive fetish with automated this and high-tech that drive up the cost of service in the US with little or no discernible benefit? To what degree does this mindset account for the fact that we pay a premium for mediocre medical results?

I remember the commencement speaker at my brother-in-law’s graduation from dental school in 1979.  He spoke of how hospitals with state of the art cardiac treatment centers were experiencing higher than anticipated mortality, and longer recoveries to discharge, contrary to their projections.  They ultimately concluded that automating the monitoring of patients to reduce the human contact was having an adverse effect on patients psychologically. Human interaction was as important a component of the healing process as monitoring vital signs.

Then I remembered a video I saw of a Soviet fighter jet that was ranked competitive with ours.  Its maneuvers and performance in flight were impressive.  But equally impressive was the observation by experts that the aircraft could be maintained in the field with a much smaller support complement, and could operate from less developed facilities, making the system easier to deploy to a broader array of combat theaters.

Which brings us to Iraq-istan.  We built ever bigger vehicles to protect our forces from ever more potent, but comparatively primitive, lethal devices, modified on the run by people not prepped at The Point. Why are we trying to build indigenous armies to look and operate and equip like us, when our tactics didn’t win the wars, and they are unlikely to be able to sustain an army in a format we can no longer afford ourselves?

Then there’s the Navy’s Littoral Combat Ship, which was recently reported to be of questionable combat capability. At $670 million a pop, that’s not reassuring, but for the DOD, that’s not unusual.

Then there’s ‘Big Data’, which I have spoken to previously. There are appropriate places for Big Data. But is it a need, a niche, or a rage? Is it a security blanket to cover lazy analysis and a paucity of insight with an impressive pile of data that reveals little?

No doubt, there are marketing opportunities for big data to identify targets of opportunity with the precision of an electron microscope, but how much is the supporting investment worth in aggregate in sucking blood from the bloodless of the broader economy?

A marketing professor once noted that a survey of CEOs revealed that most doubted the value of their advertising in affecting sales, but they nonetheless felt compelled to make the investment in it.  Faith-based management.  The constant struggle of the data parasites to gain more ‘insight’ into the market and attract more web advertising seems more like a churning of possibilities for would-be advertisers than achievement of measurable results.

In my own professional pursuits, I have often been dismayed to observe how little we use of the technological capabilities we have.  For example, most professional business people use Excel as little more than an electronic piece of paper.  They are substantially ignorant of most of its functionality to make them more productive, and not highly motivated to learn.  Do we need a better Excel, with more options on the ribbon? A different format? More intuitive? A dab of artificial intelligence to anticipate our needs and fill in the logic as well as the blanks?  No. We need to learn to use what we have to better effect.  More managers are adept at using PowerPoint than at using Excel or various database applications that create the content for the presentation. More focus is placed on the image of presentation than the substance of what is being presented. Is that where the focus should be?

And apparently nobody does Big Data like the NSA. Does it make us safer? I doubt it. Sweeping everything into a gargantuan pile gives a delusion of control; being able to drill quickly and reliably to the critical core is what matters. If our intelligence services are so capable, and so well equipped with the latest data crunching technology, why is it that we are continually blindsided by international events that they are so meticulously scanning? Wrong data? Wrong systems? Wrong management perspective? Clearly, we are not getting enough ‘bang for the bytes’.

Eventually, our pursuit of technology becomes an unsustainable defense of technology’s promise more than an attainment of its productivity. The perpetual upgrade cycle is a hook, not a destination. As true improvement becomes tougher to achieve, superficial refinements mask mediocrity as progress. Marketing supplants R&D. PT Barnum (“There’s a sucker born every minute, and two to take him”) becomes the management guru of choice.

We are now witnessing a victim of this cycle.  Apple has become a mature company.  It is now more a competitor than an innovator.  It has critical mass, momentum.  Like Microsoft. It can cruise for a long, long time, doing its thing. But it may have passed its moment as ‘disruptor’. Now, as it moves through the tech jungle, bloated with its success, and financially insulated from the immediate consequences of lethargy, myopia and bad decisions, it must wonder what small microbe could bring it down.

*  *  *

By now I hope you get my drift. I’m no Luddite (or so I’d like to think). I love data and respect technology, but I question the pay-back in many instances. Big Data does what only Big Data can do in gene sequencing and climate change modeling.  The Mars rovers and Hubble space telescope are truly remarkable machines, evidence of our best capabilities put to appropriate use. I hope that someday, some large-scale particle collider will uncover mysteries of energy that will make desk-top fusion and unlimited cheap, clean energy a reality.

But I question the wisdom and benefit of constant turnover for the sake of turnover. I question disruption for the sake of profit without progress.  I don’t suggest that we return to some imagined ‘golden age of simplicity’. But we do need to put the brakes on neurotic ‘innovation’ for its own sake.

Or, putting it another way, we now have over $4 trillion in infrastructure investment deficit in the US to upgrade sewage, water treatment and distribution, roads, public transportation, bridges, power generation and distribution, reliable and modern communication to areas lacking it, basic healthcare to those with none, basic nutrition to those with little,…..

Do we really need a better iPad?



Dissecting the Skills Gap

A recent editorial challenged the assertion of corporate chieftains that there is a skills gap in the US domestic workforce that requires remediation by more government training programs (ironic from those who rant against government intervention on all other fronts and taxes in general) and looser immigration. It contains a number of truths, but falls short of the full story.

The main truth, as noted in a previous rant on the subject, is that employers (specifically larger corporations with pricing power and options in the labor market) are squeezing labor for the greatest price concessions (wage and benefits) they can extract, to the ignor-ance of all other collateral direct and indirect costs.  What are the direct costs? Reduced loyalty and motivation, higher turnover, fragmentation and inflexibility of the workforce through outsourcing and bottom feeding (since the more capable candidates will likely go elsewhere, if there is an elsewhere).

And let’s talk about efficiency.  In a recent extended engagement with a global corporation, I had the opportunity to deal with programmers in India and help desks in Costa Rica on numerous occasions. We often hear the blather about how efficient the global workforce is because digitally networked ‘resources’ around the world can be working a  problem 24/7, rather than just 8 hours a day, five days a week, thus compressing turnaround times.  Yeah.  Another fantasy of C-Suite crackheads.

I dreaded dealing with Indians on programming issues.  First, verbal communication by phone was tortuous due to accents (and I was not the only one suffering this problem, but I will allow that it probably cuts two ways). I had to resort to written correspondence in many cases to assure that we understood each other, which often slowed the process but improved the result. Further, because of the time zone difference, I never knew who I would be dealing with on the other end depending on the time of day.  It seemed they worked in teams, and I wondered how efficient the coordination and hand-offs were among them.  (Interestingly, and by contrast, I much preferred dealing with the Costa Rican help desk.  Although English was their second language, we generally could communicate much better verbally, and we were in the same time-zone.)

Another interesting observation on efficiency had to do with a compliance program for which the Executive gods wished to hand off back-office tasks to the low-cost overseas sweatshop (pardon me, service center) run by a major consulting firm.  After much back and forth, the consulting firm conceded that it could not meet the Company’s requirements because their off-shore talent would not likely be able to grasp the nuances of the compliance requirements.  It’s not that the off-shore folks were stupid.  It’s that the ‘soft’ requirements of the program were heavily laden with cultural inferences alien to that workforce; reasonably well-educated as they were, they would fail to pick up on precisely the subtleties that the compliance program needed them to detect. No fault of theirs. No fault of ours. Culture matters, and profoundly impacts communications in a world that is still far from completely homogenized.

Most of these inefficiencies are soft, and fragmented, and not easily quantitatively accumulated to a net provable conclusion, but they are nonetheless real.  The fact that you can’t measure a problem doesn’t mean you don’t have one.

As for the indirect costs, these are best seen at the lower end of the food chain with immigrant labor. An employer who succeeds in hiring cheap immigrant labor to boost his/her bottom line has in too many cases hired someone who, by virtue of that fact, cannot afford decent housing, decent education for their kids, decent food, or any health care; in essence, someone who becomes by various avenues a burden on the greater society, which must fill in the gaps in their economic life created by their ‘cost conscious’ employer’s narcissistic business model.  In other words, and contrary to the popular meme, the employer is the free-loader on society, not the immigrant who is attracted by the employer’s quest for ‘cheap’.

*   *   *

The important point, which corroborates one of the arguments of the editorial, is the tendency of business in general to pursue narrowly defined parameters of ‘success’ with concerted disregard for collateral costs to themselves and others.  The more blatant efforts are cost transference without benefit transference, and it is not limited to labor.  The reduction in quality assurance and related product reliability transfers to the consumer the risk and cost of product failure and its attendant consequences. The substitution of automated customer service for human customer service saves cost for the company, maybe, but costs time for the consumer, as well as increased dissatisfaction with response, which is another of those soft costs that is difficult to quantify. These are all manifestations of a corporate mentality of ‘me first and to hell with the rest’; by no means universal in the US, but most definitely prevalent to a disturbing degree.

*  *  *

One of the greater problems that the editorial barely touched upon was the issue of complex requirements that cannot be realistically or easily matched to talent.  Some of this is, as suggested, wishful thinking on the part of management.  But some of it is the unintended consequence of the technological ecosystem that corporations are creating for themselves.

In one of my engagements, I went through an on-boarding process of obtaining system access clearance to the relevant systems I would need.  I was presented with a staggering list of systems and utilities that comprised the corporate information ecosystem. The menu crystallized a growing, subliminal perception that in many cases we are creating automated ecosystems that are beyond our organizational capacity to manage.   Of course, no individual could possibly manage all the systems that I observed on that menu, but how many individuals must manage some significant subset of these tightly integrated systems?  And how easy is it to replace one of these individuals with someone having the precise, or closely compatible, skill-sets?

This problem is not confined to the IT black box.  It extends into the entire organization which interfaces with the information infrastructure in varying degrees, which means virtually everyone (except for the C-Suite types who will have their drones do the burdensome data harvesting.  That’s why God invented PowerPoint.)

In Management’s quest to substitute technology and squeeze out the ‘costly’ human resource to the greatest degree possible in the mindless pursuit of economy and increased productivity, it has made itself dependent on an ever smaller base of talent to meet its ever escalating demands. The only thing dumber than this strategy is the expectation that the public education establishment can anticipate and meet the Corpocracy’s ever-shifting labor skill set demands, and save them the burden of paying their way.  Talk about ‘entitlements’!

*   *   *

As a result of this evolving techno-ecosystem, corporations are forced to pursue specialists with excruciatingly finite skill-sets.  What they really need is more generalists.  As a brief but relevant personal digression on this theme, in the course of my career I have avoided specialization and remained a generalist, contrary to conventional career wisdom.  This springs from my observation as an auditor that most operational and performance failures in processes occur not within organizational units or systems, but between them.  It is the connectivity of operations between specialized points of responsibility rather than within them that contributes to failure of the whole in many cases.

This is a probable consequence of silo mentality that comes with intense specialization in absence of a generalized comprehension of the whole. As corporations continue to obsessively pursue specialization without creating the necessary complement of generalists who can perceive the whole and integrate the pieces, they will suffer further degradation and technological entropy.

One of my advantages as I have moved from client to client within various industries has been to bring a generalist’s perspective to each, and then pivot to its specific needs.  I have rarely suffered from a lack of specialists whose knowledge could augment my limitations, but I have often had to supply the glue that was missing among their various specialties.  Very often, that glue was not any degree of native brilliance, but the application of basic accounting and business control concepts.  I would suggest that the fundamental need of business is not to nurture more specialists but to recruit more generalists whom they can train up to their specific needs.  Specialized expertise is ephemeral in today’s business environment.  The ability to learn and adapt is a renewable resource.

If the problem of mis-matched skill sets in the labor market were merely one of delusional expectations and mis-informed corporate strategy, it would be bad enough.  But I suspect that in the US, it is worse. It is part of a symbiotic cultural dysfunction on the part of management and labor that has evolved over time and will prolong the agony of the Great Repression beyond reason. More on that in my next post.



Is Clim-Ergy like Y2K?

Yes.  But not for the reason climate deniers allege.

Climate change deniers and peak energy cynics often claim that these contemporary issues are urban fantasies, hoaxes, or, worse yet, conspiracies of gullible Chicken-Littles and manipulative one-worlders to undermine paradise as we know it. The supporting ‘fact’ of this analogy is that Y2K, for most people, was a non-event. If you weren’t an accountant or an I.T. professional or in the information bowels of a major corporation, it probably seemed like one.  Thankfully, the world did not end at Midnight. Indeed, it barely seemed to skip a beat!  Case closed.

Except, that’s not really as it was, and the similarities between how society handled Y2K and how it is handling Clim-Ergy should give us pause. Allow me to elaborate.

I first heard of Y2K in 1975, while working as an internal auditor for a major multi-line insurance company.  As we were concluding an audit of data center operations, the I.T. auditor on our team was telling me how technology was likely to change over time. He mentioned the Y2K dilemma in passing with the memorable closer that “by the end of the century, most, if not all, the affected systems will be replaced by then.”

Didn’t happen.  The next time I encountered Y2K was 1995 when, by chance, I read a journal article on the risk and the state of unpreparedness facing the world community.  I was quite surprised by this revelation, and a bit disturbed.  First, I knew how long it takes major corporations to implement a major IT system on a good day, and that putting a gun to their head for accelerated implementation generally goes badly.  Second, I knew that, contrary to my colleague’s firmest expectations, there were still a lot of legacy systems surviving in major corporations, supporting pretty front-end interfaces with old COBOL code held together with the electronic equivalent of chewing gum and baling wire, while supporting applications programmers hummed mantras invoking higher technology spirits to keep the crappy code running ‘until we can replace it.’ Third, a whole new platform of software had evolved in the Win-tel world with the introduction of PCs into the business environment.  Did they bother to avoid the problem? No. They followed standard practice.  So much for heads-up, progressive remediation.
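For readers who never wrestled with the bug directly, the transaction-track problem and its most common fix can be sketched in a few lines (Python here rather than COBOL, and the pivot year is purely illustrative; the real ones varied by application):

```python
def naive_age(two_digit_today: int, two_digit_birth: int) -> int:
    # Pre-remediation logic: years stored as two digits, subtracted directly.
    return two_digit_today - two_digit_birth

def windowed_year(yy: int, pivot: int = 50) -> int:
    # The common Y2K 'windowing' fix: two-digit years below the pivot are
    # read as 20xx, the rest as 19xx. Cheaper than widening every date
    # field in every file, but it only defers the ambiguity.
    return (2000 if yy < pivot else 1900) + yy

print(naive_age(99, 60))   # 39 -- correct in 1999
print(naive_age(0, 60))    # -60 -- in 2000, a 40-year-old ages backwards
print(windowed_year(0) - windowed_year(60))  # 40 -- the windowed logic holds up
```

The catch was that the pivot year differed from system to system, and was often undocumented, which is why remediation was as much an audit problem as a coding one.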

For the next two years society in general and the business community in particular appeared to be playing ‘Who’s On First’. If there was a hero in the Y2K drama, it was Alan Greenspan.  About three years before ‘The Event’, he decreed that any bank which was not Y2K compliant by a specific date would be closed or merged with a capable institution.  Finally, someone in a major sector had taken  charge with a clear deadline and explicit consequences.

What followed next was interesting; a cascading series of epiphanies among various industries and institutions. The insurance and investment communities fell in line with the Fed. The SEC required public disclosure of anticipated Y2K costs and the registrants’ capability for timely remediation of issues. The legal profession smelled potential blood for litigation, and insurers devised new liability products to manage risk. As individual companies became comfortable with their own level of remediation and survivability, they began to realize that they were only as secure as their supply chain and customer base, and so began seeking written assurances from business partners that their systems were also Y2K capable.

There were two tracks to the Y2K dilemma.  One was the transaction track involving systems that process monetary and operational transactions and events in which date fields are critical.  The other was the embedded chip track. Embedded computer chips were made in the millions by thousands of vendors over the course of three decades.  Some were custom chips designed for a specific machine with specific capabilities.  Many were ‘generic’ chips, off-the-shelf devices with basic capabilities that could be programmed in a variety of applications for a variety of purposes. Many of those chips had date capability, whether it was actually used in a specific application or not.  Documentation was not always good.  Many manufacturers had gone out of business or were acquired over time, with technical specifications lost or irretrievably buried in archives.  Problem: which chips having which capabilities were alive in which critical applications?  Like heart pacemakers, nuclear power plants, aircraft, ya-di-ya-di-ya?

The typical corporate public response was “No Problem!”  But privately many corporations struggled to identify and isolate potential risks, and to devise contingency strategies to deal with failures and maintain continuity of operations.

I sensed that by August 1999, business in general felt it was pretty well along in remediating transaction systems, and had isolated areas of embedded chip risk to the point that it could respond with workarounds that could contain problems.  But nobody could know for sure until The Ball dropped at Midnight.

Was Y2K real? Yes, but it was contained because it was successfully remediated by society as a whole at the last-minute.  Whatever glitches may have occurred were minimal or were effectively concealed from obvious effect.  I personally encountered three Y2K glitches; two before The Event, and one after.

One of the glitches had to do with the run-and-gun implementation of an ERP system by a client.  Because of the haste of implementation, it was largely installed Off the Shelf (OTS) with minimal customization. As a result, some functions and data fields were not effectively mapped between the old system, which had to be retained, and the new system; a costly but necessary redundancy. My role was to devise a bridge in Excel between the old data format and the new record format, and keep it working until the software vendor could install a patch. Four months of billable time for one month of incurred work, because the client was in a critical business, and chose to keep me on stand-by.
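The bridge itself was nothing exotic; stripped of the Excel mechanics, its logic was systematic field mapping and reformatting. A minimal sketch of the idea (in Python, with hypothetical field names standing in for the client’s actual layouts):

```python
# Hypothetical mapping from a legacy system's field names to a new ERP's
# record layout. The real bridge was a set of Excel lookups, but the logic
# is the same: rename, reformat, and pass through.
FIELD_MAP = {
    "CUSTNO": "customer_id",
    "INVDT": "invoice_date",   # legacy yymmdd string -> ISO date
    "AMT": "amount",
}

def bridge(legacy_record: dict) -> dict:
    new_record = {}
    for old_name, new_name in FIELD_MAP.items():
        value = legacy_record[old_name]
        if old_name == "INVDT":
            yy, mm, dd = value[0:2], value[2:4], value[4:6]
            century = "20" if int(yy) < 50 else "19"  # the same windowing fix
            value = f"{century}{yy}-{mm}-{dd}"
        new_record[new_name] = value
    return new_record

print(bridge({"CUSTNO": "A1001", "INVDT": "000315", "AMT": 125.0}))
# -> {'customer_id': 'A1001', 'invoice_date': '2000-03-15', 'amount': 125.0}
```

The tedious part was never the transformation itself but verifying, field by field, that nothing fell through the gaps between the two record layouts.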

In fact, one of the other effects of Y2K that was not widely reported in the general press but was acknowledged in the business press was the need for remedial consultancy after The Event to fix numerous serious glitches in ERP systems that were implemented in haste.  While much was made of the dot-com implosion and its impact on the economy, I would offer that an equal drag was the post-Y2K remediation costs, and business opportunity costs resulting from the diversion of Y2K resources from more productive applications.

*  *  *

One of the technology wizards at consultancy Cap-Gemini projected that Y2K would cost the world $1.3 trillion to remediate.  Mind you, that’s when a trillion was real money.  Today it will only buy you a one-country war without remediation. I didn’t run the official adding machine tape accumulating the actual costs, but the number sounded credible based on what I knew major corporations and institutions were reporting for expenditures (and I assume that in many cases, the majors were under-reporting for the same reasons that no company likes to discuss its security breaches). I would guess that the total breaks down as follows:

– One third went to fix the intrinsic problem.

– One third was paid in premium rates and overtime to old dis-interred COBOL programmers to fix existing code, or to big ERP software providers for run-and-gun last-minute implementation of ERP systems when management concluded that timely remediation of existing software would not be possible, or would hold great risk.

– One third went to contingency planning, generators in case of power failures, overtime for all-hands-on-deck drills and event staffing, audits of remediation efforts, liability insurance, legal work, revenue lost from curtailed operations on the eve of The Event.

In other words, about $800 billion of worldwide cost was for nothing.  If we had addressed the Y2K dilemma in a responsible, proactive manner over the twenty-five years between the time I first heard of it and the time the ball dropped in Times Square for The Event, we could have saved enough money to fund another small war, or maybe some infrastructure improvements.

But here is the more fundamental point of this exercise.  Humankind created the technology that led to Y2K, and failed to control it until the last-minute. Leadership procrastination and arrogant complacency led us to the brink of a potential calamity.  It took the imminence of a crisis to galvanize organizational action that could have and should have occurred much earlier in the supposedly rational managerial mind.  Does it sound like any other quagmires that come to mind?

Now here’s where Y2K differs from Climate Change and Energy Transition. We created the information technology.  We understood its mechanics and impacts for the most part, except we kind of got sloppy with the embedded chips.  By contrast, we did not invent climate, and we barely understand its mechanics. We proceed to impact a system we do not yet understand with the same arrogant complacency about consequences.  Similarly with the energy transition in general, and fracking in particular.  We are replaying the institutional incompetence and irresponsibility of Y2K with these current paradigms, and let’s not forget our management of the economy as a whole.

Nor should we stop with Clim-Ergy. How about nano-tech, genetic engineering, and geo-engineering? How many of us really believe that these technological frontiers are any better managed by our corporate and political gun slingers than was Y2K?

Or how about cyber-security? Do you believe that the institutional corporate sensibilities that led to Y2K, and that have perennially short-changed investment in computer security over the evolution of this technology are any better equipped today to protect their enterprises and the greater society’s stake in them from criminal or nation-state assault?

If this assessment sounds exceedingly cynical, at least it is founded on ample precedent.

*  *  *

One group of computers did not suffer exposure to Y2K: Apple PCs and systems running variants of the Unix operating system. But while they were not vulnerable to Y2K, there is some concern that they face a comparable dilemma in 2038, when signed 32-bit Unix time counters run out of seconds. This too was known in 1995, but was regarded at the time as a ‘deferrable issue’.
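The 2038 deadline is not folklore; it falls straight out of the arithmetic of a signed 32-bit counter of seconds, which anyone can check:

```python
from datetime import datetime, timezone

# Unix time counts seconds from 1970-01-01 UTC. A signed 32-bit counter
# tops out at 2**31 - 1 seconds; one tick later it wraps negative.
MAX_32BIT_SECONDS = 2**31 - 1
rollover = datetime.fromtimestamp(MAX_32BIT_SECONDS, tz=timezone.utc)
print(rollover)  # 2038-01-19 03:14:07+00:00
```

Systems that have since moved to 64-bit time counters are fine for the next few hundred billion years; the question, as with Y2K, is how many legacy systems will still be quietly counting in 32 bits when the moment arrives.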

Not to worry.  We’ve got 25 years to go till then, and besides, “most of today’s systems will be replaced by then.”

In any case, I’m not the least worried. I anticipate that my personal operations will terminate before then.

Beam me up, Scotty. There’s no intelligent life down here.