Corporate Universities – A Case Against

There are lots of versions of corporate universities. I actually worked with one client (Amtrak) whose senior leadership decided they needed to have an Amtrak University. The senior training leader–when it was clear she couldn't fight this decision–simply relabeled the existing training and development resources a "University" and set up a "campus" at an existing training site in Wilmington, Delaware. Nothing else changed (in terms of content offerings, structure, focus) but hey…they were a University now.

I'm sure not all organizations that adopt a "university model" follow that approach. But I'm skeptical of the value of adopting a Corporate University approach to learning, development and performance within organizations. And I'm not the only one who holds this belief. The esteemed Ruth Colvin Clark has noted similar issues around the move to corporate universities.

I think there is a tendency to assume that a "university" has more prestige and functions at a higher level than a training department–and that this is therefore a good thing. We (meaning people in general) often view the term "university" positively and assume the work is more rigorous (the converse would be referring to your organization's L&D shop as a "kindergarten").

But this focus on branding an organization's learning and development shop as a "university" seems to me to be misguided. First, it places the emphasis on education versus performance. The primary reason for learning and development in most organizations should be to improve performance. That's why most training evaluation measures (looking at reaction to the training, or even whether learning took place) don't seem very relevant to me. I can enjoy the training, or even learn a lot, yet fail to get better at my job. When the focus is on learning rather than performance, it's too easy for learning/training professionals to be unaccountable for results ("don't blame me that results didn't get better–the participants enjoyed the class!").

Additionally, I'm not sure universities provide a great model to guide training and development functions. While there are plenty of great examples of innovative learning approaches within higher education, most educators would say that the majority of universities still operate with very traditional models and approaches to teaching and the organization of knowledge.

Plus, a number of organizations have treated the creation of a corporate university as an occasion to centralize the learning function (to create a "campus"). The irony of this approach is that one of the better examples of innovation at many schools of higher learning has been the decentralization of learning–moving out to the field, off the campus, away from a central, visible school.

I would argue that many organizations that have adopted a university model have done so either to "keep up with the Joneses" (i.e., seeing it as a trend they need to follow) or as a way to enhance the prestige of the training department. I think a far better way to enhance the prestige of L&D is to demonstrate a strong track record of focusing on, and effectively building, performance.

Top Performers and Behavior – Getting the Focus Right

Authors, consultants and organizations have started to pay more attention to the value of exemplary employees. Some of this is probably due to competitive pressures; some is probably due to work like Malcolm Gladwell's "Outliers." Unfortunately, a great deal of the focus on exemplars has been–from my perspective–misguided.

There is a tendency to view exemplars in terms of their personalities. So we describe them by their attitude (refusal to quit, optimism) or a personality trait (inquisitive, creative). And of course the problem with doing so is that personality doesn't make a superb performer. There may be traits and attitudes that are valuable to have. But what performance analysis tells us is that you can't just parachute someone with particular traits into a particular job and automatically get success. And that two performers with very different personalities can achieve similar outstanding results (especially with white collar work).

It's not that we should ignore personality or attitude. It's that we should start by looking at outcomes, identifying what produces those outcomes for the specific job, and then looking at how behavior or attitude or personality may influence performance on a specific task. Otherwise, we devise personality assessments that tell us to screen for particular personalities and ignore the reality that people can have similar personalities yet be radically different in terms of results.

Thomas Gilbert covered this issue extensively in his book Human Competence: Engineering Worthy Performance. Gilbert talked about the fallacy of the "cult of behavior," in which people assume that if only workers would behave a particular way, then great performance would result.

If people focus on exemplars but continue to emphasize behavior or personality as the key elements, then performance will not advance. Gilbert's insights are still true today.

Why is Service so Bad?

Customers like to kvetch, so it's easy to complain about missing the "good old days." But a lot of the time the old days weren't so good: no vaccine for polio, one in two children dying before the age of 10, maybe a world war going on with millions dying, being born lower class with nonexistent chances of going to college, or living in a time when there was no such thing as an iPod. Living in the past wasn't always better.

But lots of people (myself included) feel that overall service performance is getting worse. That's not just generational narcissism or the curmudgeonly attitudes of aging baby boomers. I do a lot of client work around service issues and customer experience, and that's my take. And Bloomberg and JD Power collect data on overall service, and that's their take too–overall service performance is getting worse. Oh, there are exceptions–firms that continue to raise the bar. But overall, most firms seem to be doing a worse job serving customers and creating distinctive experiences that provide a competitive edge. How is that so, when so many firms pay lip service to service and actually spend a lot of bucks on supposedly improving it?

I'd argue there are a few factors:

1.  The economy has certainly had some impact on this issue. As firms have laid off people, some work simply is not going to get done (or won't be done with the same degree of detail or consistency). When you compete on price (where customers have no loyalty), service standards tend to be evaluated solely on the basis of cost (i.e., "Is there a cheaper way to do this? What can we stop doing?"). But the economy isn't the sole culprit, because data on dropping service levels showed up in many sectors prior to the global recession.

2.  Too many firms don't evaluate service from the customer's perspective. We hear hoary exhortations like "the customer is always right" or "under promise and over deliver," which are actually bad service mantras to live by. Firms that don't provide service guarantees usually withhold them on the belief that customers would rip them off (when data consistently shows this not to be the case). Too many firms define good customer service on the basis of a set of associate behaviors (smile, be friendly, etc.) that are nice but usually don't matter if a host of other service issues are present before the company associate ever comes into contact with the customer. Issues like the customer's expectations (realistic or off-base) and the company's reputation (deserved or unfair) have far more impact than smiling, being friendly, listening well and being prompt (or other behavioral tactics). Related to this, too many firms view good customer service as a set of employee behaviors–not a performance issue. That's unfortunate, because when we make it all about behavior, we set up a series of targets that are moving, nebulous and difficult for employees to meet (so we breed cynicism and ultimately failure). The outcomes we want from good service are not friendly staff or smiling desk clerks. We want an outcome of customers who feel welcomed and respected–it's about the customer's perspective, not the employee's behavior. As more firms attempt to improve service, there seems (at least to me) to be more focus on behavior, which of course runs counter to how performance works (starting with outcomes and working our way back).

3.  Absence of standards is a huge factor in service failure. When you define good customer service as a set of behaviors (like a "friendly smile"), it is difficult (though not impossible) to measure performance objectively. Check with any five-star hotel property around the world and you'll find hundreds or thousands of performance standards. Some of them are behavioral or appearance-based. But many involve specific outputs (what a clean sink is supposed to look like) that allow for consistent, objective, measurable data that can be used to track performance and assess progress. Show me a business with consistently good service and I'll show you one with explicit standards to measure service against. For too many firms, identifying, codifying and then measuring standards is just too much work. So they just tell employees to go out and "wow" customers.

So a better economy might help improve service somewhat. But ultimately, service problems in the West are based on a fundamental misunderstanding of customer service and performance.

One of My Favorite Websites

As the content (and garbage) on the web continues to proliferate, it's sometimes hard to separate the gold nuggets from the chaff. This is especially true in the performance arena. There is one particular site that has been up a while ("a while" in this case means since 1995). It's the work of consultant Don Clark. Don has produced a true labor of love that everyone in the workplace learning and performance fields needs to be aware of. With a background in the Army and then Starbucks before he set off on his own, Don decided to create a site not to promote himself but to cover a wide range of ISD, training, OD, performance, management and programmed learning content. He's got a variety of self-created templates, forms and manuals you can download on topics like ISD or task analysis. He provides a list of HRD names and why they matter, books that are important, timelines for particular topics, relevant quotes and more. But mostly the "more" is about tools and examples and content around how to do what it is that we do–more intelligently and effectively. And the site is clearly designed to share knowledge, not for self-promotion or profit. Frankly, I cannot think of a single person in the workplace learning and performance field who has been so prolific on their website in terms of content.

The primary topic headings off the main page are: leadership, training, learning, history, knowledge, performance, java, news and his blog. And under each of these topics, you've got a wealth of depth (in some cases over 100 individual pages of content in the form of a user's manual or separate job aids). Quite simply, there is a tremendous amount of eclectic depth and breadth on this site. In the few exchanges I've had with Don in the past, he's proven to be trusting, magnanimous, helpful and easy to deal with. I once wanted to use some of his material with some university professors in Central Asia, and instead of making me jump through a lot of hoops, he made it easy to move forward with his stuff.

I'm going to list the URL in just a few lines, but I do so with a caveat–I think the URL has changed a few times over the 15+ years that Don has had this site up (and he continues to add to it). So if for some reason the URL doesn't work (which could be due to my error or a change on his part), I've always found it by searching for "Big Dog's bowl of biscuits" (certainly a memorable phrase). And if you go to the website and look at "about," you'll see pictures of "big dog" and "little dog" with an explanation that will probably draw a chuckle from you and also drive home how amazing this site is–Don is clearly doing this out of a desire to help the field and share knowledge, not to profit individually or market himself.

The most recent URL that got me to Don's site is www.nwlink.com/~donclark/ and if you haven't visited the site before, I strongly suggest you do. Don–keep up the great work!

Blindspots Revisited

Some of you may recall a previous blog post I did on blindspots ("Understanding Blindspots"). A quick refresher on that concept before I take another crack at the topic: we have areas of ignorance—things we don't know, but where we're usually aware it's a weakness or deficiency. For instance, I know nothing about horse riding or dressage–I'm aware that is an area of ignorance for me. Then we have blindspots—areas we not only don't know about but we don't know that we don't know. In other words, blindspots are particularly dangerous because unlike an area of ignorance (where we might tread lightly or steer clear because we know it's a weakness), blindspots typically involve overconfidence. Individuals can have blindspots and organizations can as well—in fact, most examples of military or intelligence failures involve blindspots.

I wanted to revisit this topic because I’ve been working with two recent clients on their strategy, plans and high-level goals.  One client is in the US intelligence community and another is in the private sector (plus plays in the national security space).  A key part of both pieces of work has involved identifying the collective blindspots within each organization.   While I’ve done work like this plenty of times before in my career, it’s always fascinating to see what emerges as a blindspot within the client organization.

Both clients have bought into the value of identifying their blindspots. Only one of the two, though, has really committed to any action to deal with those blindspots (beyond going "yep—that's spot on!" and then ignoring them). At least by publicizing and talking about a blindspot, we have a chance of mitigating it a little bit—perhaps just turning it into an area of ignorance!

How do you spot blindspots? There are a number of techniques. One is to look at what doesn't get talked about in the organization, or what isn't funded. While that's not a foolproof way of identifying a blindspot (sometimes something doesn't get talked about because it isn't important!), it's a good starting point. Another is to look at what blindspots the organization had in the past and then test to see if those conditions have changed. A third approach is to identify critical assumptions the organization or leadership is making. Assumptions aren't bad—we have to make them all the time. But most people make assumptions without being aware they're doing so. It's either unconscious or we consider them to be "facts." A fourth approach is to identify the mental models that the leaders and organization share (mental models and assessing them is a topic for another blog post!). Degree of confidence on particular issues is also a clue to potential blindspots—issues that an organization has had success with in the past and is confident that "we have this nailed" often forecast a cockiness and a failure to look for disconfirming information. Finally, organizational culture (if there is a strong, cohesive, dominant culture within the organization—and usually there isn't, usually it's a series of subcultures) can be a clue about blindspots.

Gulf Oil and Performance Lessons

With some labeling the BP oil spill in the Gulf of Mexico the worst environmental disaster the USA has ever experienced, it's worth looking at what we know so far about efforts to deal with the spill for performance improvement lessons. As I look at what I've heard about this disaster, several critical lessons come to mind.

  1. Ignore process at your own peril. There has been such an emphasis on "action" and "leadership" (both by private and public sector organizations) that we've seen lots of money, people and activity–but often at cross-purposes. Throwing money and resources at any problem is usually ineffective when there is no clear alignment around the process connecting all of the specific tasks.
  2. It’s a lot easier to prevent a problem than to fix a mistake.  The Gulf Oil spill illustrates this point so well–far better and easier to prevent the rig blowout than to clean up tar balls from beaches and try to bathe birds.
  3. Being clear about the desired outcome is critical. Those of you knowledgeable about performance improvement know how critical outcomes are as a means of providing direction. Unfortunately, everyone assumed there was a clear purpose (clean up the spill) when actually there was tremendous disagreement about direction. Some groups argued for booms to corral the oil (which doesn't address oil beneath the surface). Others argued for heavy use of chemicals to eat the oil or break it down (which was opposed by others who felt this could produce worse environmental impacts than the oil itself). The disagreements were more than just differences on tactics; they reflected major (and often incompatible) directions.
  4. Data matters. Throughout the first month of the disaster, there was a consistent inability to answer some of the most basic questions: approximately how much oil is escaping daily, what backup or contingency plans are reasonable if the first cap fails, what are the environmental impacts of the oil dispersants being used, and what percentage of the oil remains beneath the surface? Without some kind of data, policy decisions were being made on the basis of educated guesses and anecdotes.

What other performance insights have you gotten from this mess?

Understanding Blindspots

I've been doing work on strategy and strategic planning with a number of different clients lately, and it's gotten me thinking about the issue of blindspots. There are things that we know to be true (or strongly suspect to be so). I don't mean through dogma or blind faith, but rather through data, research, experience, customer feedback and measuring performance—there are some things about which we can confidently say "this is something that we know to be true or accurate."

Then we have areas that we know we don't know. For instance, I know that I'm pretty uninformed about the tax code. Because of my awareness of my ignorance, I can make smarter decisions about taxes—by hiring an accountant, or by being especially careful when I fill out my taxes each year.

The reality is that no person or organization can know everything. So ignorance about particular topics or situations is a reality of being in the world.

But a blindspot occurs when a person or organization is ignorant about a situation and doesn't realize the ignorance exists. It may be due to dogma. It may be because the situation has changed—what used to be true no longer is, but people haven't recognized that. It may be due to a lack of depth—someone doesn't realize the degree of complexity of a particular issue. In short, a blindspot is a case where we don't know that we don't know something.

Blindspots are particularly damaging to organizations. That's because most big surprises to organizations (especially environmental or market ones) tend to occur because of a collective blindspot: the organization and its executives simply failed to perceive the potential for surprise on that specific issue.

Are You Measuring the Right Thing?

You've probably all heard the phrase "what gets measured gets done," and certainly organizations are paying increasing lip service to the concept of measuring performance more. This post is not an argument against measuring. It's a lesson about the importance of measuring the right things.

A number of years ago, I was called in to help a call center improve their performance. This call center was a 1-800 “help” provider—you called them when a particular appliance stopped working and you needed immediate help or troubleshooting (from simple steps to fix the problem to where to take it to get repaired to what your warranty did and did not cover). Thus, when customers called this center, it was almost always because something was broken—and often with catastrophic consequences.

The call center management team specifically asked me to find ways to reduce the amount of "hold time" that individuals had to wait before getting an associate to help them on the line, and also to reduce the average length of the calls (the theory being that shorter calls would also mean less wait time). And, as a "P.S.," the management team asked me to also take a look at a call center associate named Martha. Martha, they said, was a really sweet person, but if she didn't turn things around, they would have to fire her. Specifically, they said she was too informal with callers (oftentimes not referring to them as "Mister" or "Ms."). And her average call length was longer than that of the majority of other associates in the call center. Now it's worth noting that the vast majority of call centers do measure things like average wait time, call length and whether or not associates follow the script—that's pretty standard for the industry.

Improving Performance Doesn’t Mean You Do Performance Improvement

Okay, I've got a pet peeve—something that really pushes my buttons. Data from a host of sources has continually shown that organizations and executives are placing more emphasis on "performance." Leave aside the reality that many of them (organizations and execs) don't really know what performance is in this context (below the organization-level results of profits or sales). But almost everyone in the HR field knows there is more emphasis on "performance."

So part of what we see is people (internal staff as well as external consultants) tacking the word "performance" onto what they do. We see "performance-based training" or "performance-enhancing facilitation" or "performance-driven HR" or some other variation. To me, this reveals a fundamental misunderstanding of the performance improvement field.

Performance – And Performance Appraisals

Intellectually, everyone gets the value of performance appraisals. Yet every client I've ever encountered bemoans the process, and most employees criticize the appraisals. Why does something that should have so much value end up being so belittled?

Organizations do lots of things wrong when it comes to reviews. There is a tendency to spring the final evals on employees as a surprise. I have lost count of the number of people who told me that they came out of their appraisal session in shock—having heard things they didn't expect. One basic rule of the formal appraisal is that nothing in that session should come as a surprise to the employee—it's just a formal meeting to review and sign off on the informal coaching and counseling that went on earlier in the year. Another issue is the tendency for managers to put off appraisals until the last possible moment. There are lots of reasons this happens. In some cases, it's about avoiding unpleasantness or confrontation. In others, it's because it's a hassle to do the appraisal paperwork and prepare for it—often because the criteria are so subjective.