Problem-solving versus … what?

15 09 2013

A minor epiphany, but a light has certainly gone on for me. It relates to the Birkman Life Style Grid (see model below) which we use in order to rapidly aggregate a lot of data in a single synoptic view.

Model of Birkman Life Style Grid

Over the years I have dealt with a number of teams who lie predominantly along the Red-Blue axis, but who are considerably weaker in the Green and Yellow Quadrants. I have tended to categorise these teams as Problem-Solvers, because that tends to be where they shine. Blue-oriented people frame problems in creative and innovative new ways, and suggest new solutions to old problems; Red-oriented people get on with execution, now. By themselves, Blues have great ideas that never happen, and Reds do brilliantly well that which never should have been done at all. Together, we get great ideas, brilliantly executed. Hence, problem solvers.

So what about the other Axis? Greens and Yellows are potentially just as alien to each other as Blues and Reds. Greens seize opportunities and sell; Yellows set up systems and measure and analyse. They can drive each other mad, but what do you get if you combine them successfully?

Business. Green-Yellow is the Business Axis. Blue-Red is where your products and services come from; Green-Yellow gives you a business and keeps you in business. I haven’t looked at it for several years, but Michael Gerber’s “The E-Myth Revisited”, for example, presents a true business as a predictable, repeatable process for producing money (Green-Yellow) and not as a context for creative doing (Blue-Red).

Worth thinking about.

The Cybernetic Organisation

15 11 2010

Still with Kevin Kelly and Out of Control, I have been reflecting on Norbert Wiener, author of the 1948 book Cybernetics. Wiener provided a simple but profound insight that changed the face of engineering for ever.

One specific problem Wiener addressed was that of producing uniform sheet steel, which was notoriously difficult. Rolling mill operators had to contend with at least half a dozen factors, any one of which could affect the thickness of the finished coil of steel: temperature, quality of the raw steel, contamination of the rollers, pressure and so on. What made it harder was that these factors tended to be interconnected: for example, a change to the setting that regulated the pressure of the rollers would tend to raise or lower the temperature of the steel as well.

Wiener’s brilliant insight was that if you placed a sensor (to measure the thickness of the steel very accurately) immediately after the final roller, and allowed the data from that sensor to control just the final factor in the chain of causality (in this case the pressure of the rollers), then whatever the other factors were doing, the steel would come out at the right thickness. The adjustments to the pressure setting automatically took account of the sum of all the other factors (since they all played a part in the final thickness). Brilliant and counter-intuitive, and it has changed how tightly coupled systems are controlled ever since.
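Wiener’s closed loop is easy to sketch in code. The toy simulation below (the thickness model, numbers and variable names are all invented for illustration, not taken from any real rolling mill) lets the sensor after the final roller adjust only the roller pressure, while the “other factors” wander at random, and shows the error shrinking anyway:

```python
import random
from statistics import mean

random.seed(1)

TARGET = 2.0   # desired sheet thickness (mm)
GAIN = 0.5     # how strongly the sensor reading adjusts the pressure

def run(steps=300):
    pressure = 3.0   # deliberately wrong starting setting
    drift = 0.0      # sum of all the other factors (temperature, raw steel quality...)
    errors = []
    for _ in range(steps):
        drift += random.uniform(-0.01, 0.01)      # other factors wander slowly
        measured = 4.5 - 0.5 * pressure + drift   # sensor reading after the final roller
        error = measured - TARGET
        pressure += GAIN * error                  # too thick -> press harder, and vice versa
        errors.append(abs(error))
    return errors

errors = run()
print(f"first error: {errors[0]:.2f}mm, late average: {mean(errors[-50:]):.3f}mm")
```

The point of the sketch is Wiener’s: the controller never sees the drift directly, only the final thickness, yet that one measurement is enough to compensate for everything upstream.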

As Kelly points out, this insight had actually already surfaced in the field of economics a quarter century earlier (suggesting that centrally controlled command economies such as Lenin’s Russia would never be as efficient as an economy in which price was allowed to control all other economic decisions).

Has the Cybernetic insight reached your Organisation yet? I don’t mean in the way you roll steel necessarily; fewer and fewer of us are engaged in steel rolling. (And I speak as one who once did a project in a Chinese steel mill with 440,000 employees! Those were the days…)

What I mean is that in running your organisation (whether business or government department or not-for-profit), you are probably juggling all kinds of factors and – let’s be honest – experiencing the law of unintended consequences more often than you care to admit. You make a brilliant change to factor x, suddenly factors y and z are killing you. What to do? Smile and wave and carry on?

Here is a thought experiment to follow on from last week’s one: what is the output sensor for the product or service you deliver to your customers or clients? Probably the experience of those customers or clients. (How you capture, accurately, that experience is a whole other discussion).

What if you coupled that sensor to the last factor in the “production line” – what would that be? Probably the people who deliver your product or service to your customers – whether they are consultants or coffee baristas. What would happen if you allowed customer feedback to determine how your customer service was set? That might mean adjusting the type of person delivering that service, how they behave, the scripts they use, the manner in which they deploy or deliver the product or service, and so on.

We aren’t rolling steel here, so it may be a bit more involved than customer service mitigating the effects of bits of slag on the steel coil. Customer service people might actually need to inform how your product or service is designed and built. What the customer wants might need to flow right back up through the organisation. It would also be important that you didn’t chunk that information off into projects which would proceed outside the real-time feedback web, because those would then tend to act as increased input errors rather than real-time error correction. In such a Cybernetic Organisation, what Customers are telling us about our Customer Service today would need to be the most important thing anyone in the organisation could hear – because it would determine what we do today. Too crazy for you?

Benchmarking Talent is not Dolly the Sheep

25 10 2010

I talk (tweet/blog/rant/bore) a fair amount about the value of benchmarking star performance in specific roles and then using the resulting profile in recruitment. I don’t recommend this for all recruitment, only where there is a significant number of people who all perform identical roles. Often this means customer-facing roles such as customer service, account manager, etc., but it can also apply to backroom admin or technical jobs. If you have a statistically significant sample for us to work from, we should be able to tell if there are characteristics which discriminate consistently between your star performers and the merely averagely good; if there are, you can use those in recruitment.

It occurred to me the other day that this might sound as though I believe in human cloning, or at least that my ideal call centre would consist entirely of balding thirty-year-old men called Clarence, all of whom wear black-rimmed glasses and keep their pencil behind their right ear. Nothing could be further from the truth.

My experience of workgroups with unusually high numbers of high-performing people is that they look to the casual observer – and even to the workgroup members themselves – very diverse. Ask them and they will say, “Oh yes, Sue is our extrovert and Fred is the quiet one and Bill goes wave-boarding and Nancy is studying astrophysics in her spare time.” Even if we ask them, “What do you think you all have in common?”, they may be unable to answer – simply because whatever the “magic ingredient” is, it is so much part of their whole world view that they have no idea it isn’t shared by everybody on the planet. Could be obsessive attention to detail, could be absolute determination to leave every customer feeling that their most pressing problem has been solved – whatever. So in my experience, a truly star workgroup has usually been a very diverse group of people with just one or two very specific characteristics in common – not a flock of Dolly the Sheeps.

For example, take the accounting firm where two thirds of the accounting partners and staff were most highly motivated not by a love of working with numbers but by a love of making a difference for other people. How proactive do you think they were in coming up with ideas to improve their clients’ businesses? (Answer: very.) Or the local council call centre with very high scores on empathy for others and practical action. Do you think they were happy to send people off down a bureaucratic rabbit hole? (Answer: no.)

And all this is why we use the Birkman Method rather than any other tool. Firstly, it allows direct comparison between individuals (which, you may be surprised to know, most profiling tools are not designed or calibrated for); and secondly, it provides the most fine-grained data we know of, with a wealth of individual and independent scales. This isn’t “are you one of these or one of those”; rather, it allows us to ask intelligently of individuals, “What makes you unique?” – and of clusters of individuals (e.g. star performers), “What do you diverse people have in common that your colleagues don’t?”

So benchmarking, done right, is powerful stuff. And no hint of “Hello Dolly!”

25 yards of Team Building, please: changing how organisations buy assessments

18 10 2010

Talking to a friend and colleague from the US last week, he commented on the way large organisations buy assessments. Actually, the giveaway was the term “assessment buyers”… like “media buyer” or “office-supplies buyer”. Essentially the mindset seems to be almost along the lines of, “senior management expects us to use assessments, we buy x assessments per y personnel, check the box on the quarterly return, job done.”

Someone somewhere in the organisation is trying to accomplish something worthwhile with those assessments – better recruitment, better management of talent, building a team, sorting out a workplace problem, whatever. They get to use whatever the assessment buyer buys (or specifies) for them. Fair enough, as far as it goes; that is how things work in a large organisation.

But here is the huge lost opportunity: why on earth is the assessment buyer a) treating the assessment of the company’s most valuable asset as if they were buying pencils, and b) ignoring the opportunity to build over time a valuable map of the organisation’s talent and strengths? What do I mean?

Piecemeal assessment means that even if a great tool has been deployed to solve an important problem, that is the end of the story. A one-off purchase for a one-time return. Next time some or all of those people are involved in a situation where assessment needs to be used, either the same tool will be deployed again, or a new one; but either way, there will have been little if any value carried forward from the previous episode (except, usually, some increase in the employee cynicism triggered whenever there is a lack of joined up thinking: “here we go again”).

The positive alternative is to seek out and deploy a tool or suite of complementary tools which can be used across all situations (recruitment, appraisal, promotion, career development, coaching etc), and to keep coming back to the data collected already, both in the sense of “deploying once and using often per employee”; and in terms of watching trends over time, planning change programmes, post M&A integration, strategy, whatever. You won’t have a picture of your whole organisation the first time you deploy your selected tool or suite of tools for a 15-person team-building event; but you might be surprised how quickly you start seeing a map of your whole organisation come together, with key cultural or behavioural themes emerging. To senior management (remember them? we mentioned them in the first paragraph) that kind of data is solid gold. The mid-level manager achieves their immediate goal – but the whole organisation benefits as well.

Can you do this with every tool that is out there? Sadly not. Here is a short checklist of assessment properties you need to be looking for:

  • Stability of data over time for the individual (if the same person completing the assessment next week would come out significantly differently from how they did last week, forget it). Stability over 3-5 years or more should be a minimum if you intend to use this to build a picture of the organisation.
  • The tool needs to allow for accurate comparison between individuals. This may seem obvious, but very many of the well-known tools don’t do this.
  • Ideally, use a tool primarily designed for, and proven through, use in organisations. A tool developed for a PhD thesis, using a group of undergrad students as the survey sample, may not tell you anything useful in an organisational setting.
  • Choose empirically-based assessments (i.e. based on research that establishes a relationship between actual behaviours and how people show up in the assessment) over theory-driven assessments (i.e. tools that categorise people according to a theoretical model) – unless you are prepared to stake your success on the particular theoretical construct involved.
  • Look for a report establishing reliability and validity for the instrument and check that it compares well to alternatives. And always ask yourself if you are seeing a great tool – or just great marketing.

Cherished Core Value? Or Flimsy Excuse?

9 09 2010

I was going to write something about putting sacred cows on the barbeque, but realised this was inappropriate not just because of its potential to offend those for whom “sacred cow” isn’t just a handy metaphor, but also because I speak as someone who has lost 23kg in the past 15 months by following a plant-based (and entirely cow-free) diet. (Read The China Study by Colin Campbell and visit John McDougall’s site to find out more.)

So I am stuck for a punchy metaphor but here is the thing: I keep running into cherished core values in organisations which have in fact become their excuse for poor performance, failure to change, bizarre structures, failure to confront, etc etc etc.

So – how do you work out when (sorry – can’t help myself) that hitherto cherished core value should actually be tossed on the barbeque?

It is not that hard if you go back to basics. “What is our mission and who do we serve?” When that is clear, simply look at the “core value” in question in the cold light of day and ask yourselves, “how does [core value] help us AS AN ORGANISATION to deliver on our mission to the people we serve?” Arrive at anything less than “this positively and practically helps us to deliver on the mission to the people we serve, far outweighing any downside” (with concrete examples of both the pluses and minuses) and you should be pouring lighter fluid on the charcoal briquettes.

For the avoidance of doubt, I am not talking about “our people matter” or similar here. If you have that as a core value – and you act consistently upon it – that will help to deliver any mission you care to name. How you act on that core value does of course matter; “our people matter so we never confront them or never make any changes to the location of their cheese” is double-talk that will ultimately doom your people and your organisation!