Thursday 26 November 2009

Scraping the bottom of the barrel

What is it that makes a barrel? Is it the wood, the staves? The metal hoops? The shape? Of course, all of those things are vital, but they are not what makes a barrel. Interestingly, when you stop to think about it, the thing that actually makes a barrel is the thing that isn’t there – the void inside it. If the barrel weren’t shaped to contain a void, it would be useless; it wouldn’t be a barrel. Likewise, an empty barrel serves no purpose – it is just wood containing a space until you fill that space with something. Then it becomes useful. The barrel itself, if it is constructed correctly and doesn’t leak, becomes of secondary importance – what matters is what it contains.

What does all this have to do with training? Well, training is the equivalent of the wood and metal in the barrel; fitted together correctly, they contain a void. In the same way that the usefulness of a barrel is the void it contains, the usefulness of training and development is the void that they contain – the practical application of what delegates learn back at the workplace. Without that practical application, training workshops or programmes are like empty barrels – pretty to look at, perhaps, but serving no useful purpose and just taking up space.

So what makes training useful is the application. This is an interesting way of looking at the issue and should, perhaps, make those who commission training think more about the application of what delegates learn. However, if this is also the mindset of the development consultant, then it will drive a new set of behaviours.

Just as barrels have evolved into more elaborate and efficient packaging solutions, so too must training evolve. When development consultants are constructing their barrel, they should be thinking very carefully about the space they are seeking to contain and how best to surround that space to the greatest effect – in effect, thinking first about the application of the learning and only then constructing the workshop to teach it. Different shapes require different packages; different applications will require different methods. This focus on application should keep both consultant and commissioner focused on the real purpose of training – to use what you have learned. Sadly, too many training companies are fashioning beautiful and elaborate barrels which remain empty and, therefore, useless.

Thursday 19 November 2009

Learning to Lead

If you’ve ever spent time in a training room, you’ll have heard a trainer use the phrase “there’s no such thing as a stupid question.” I know it’s supposed to be supportive and encouraging, but now and again I like to take it as a challenge and see if I can’t find some really stupid questions to ask. You know the sort – the kind of questions that five-year-olds ask and which parents find so difficult to answer: things like “why is the sky blue?” or “where does the sun go at night?” or “is it actually possible to teach someone to be a leader?”

Many years ago, people who thought about this type of thing believed that leaders were born, not made. Leadership was a quality you were born with and the idea was known as the “great man” theory. The difficulty with this theory (leaving aside the obvious sexism) is that, followed to its natural conclusion, if you were born with this leadership quality you’d be a leader even if you never got out of bed. That led to a second series of ideas (known as behavioural theories) that involved what leaders actually did. Of course, anyone who’s been a leader knows that what you do usually depends on the circumstances, which led to a whole new set of ideas, known as contingency (or, “it depends”) theories.

Since the 1990s, leadership theory has fractured into a host of different schools: exchange and path-goal; charismatic and visionary; transformational and post-transformational; distributed; and on and on. However, once people moved away from the “great man” theories, the assumption that leadership could be taught went largely unquestioned: leadership was reduced to a series of tasks or activities, each of which, supposedly, could be trained. But what if it can’t?

This is obviously a question that people in my position don’t really like to ask very often – after all, pretty much everything we do is predicated on the belief that it can. But I suspect that there is actually very little – including leadership – that can be taught. Instead, these things have to be learned.

That’s not just semantics. All learning involves change and psychologists say that in order to change, we need three things:

  • understanding (knowing and appreciating the need to change);
  • motivation (the desire to change);
  • resources (the tools or environment to help us change).

As a trainer, I can only provide some of the resources and perhaps help with some of the understanding. The rest has to come from the individual. I was struck by this as I read a very interesting essay on leadership by Elena Antonacopoulou and Regina Bento; their assertion is that the most important thing leaders can learn is not how to create a vision, how to communicate or how to build trust. Instead, the best thing that leaders can learn to do is learn. I think they’re onto something.

Wednesday 11 November 2009

Working to live - part three

I ended last week’s blog by suggesting that Frederick Taylor was a fraud. Rather than rehash here all the reasons why that may be true, I’ll refer you instead to an excellent article that Matthew Stewart wrote for The Atlantic and recommend you read that. What I’d like to concentrate on here are some of the consequences of Taylorism.

Regardless of Taylor’s methods, there is nothing inherently wrong with a drive for efficiency. Everything we do, both inside and outside work, takes a certain amount of time. The principle underlying Taylorism is not necessarily fraudulent – business must involve, to an extent, the search for the shortest time in which the widget can be made, the fastest the client can be served, and so on. My concern is not with that but with the other question that no one seems to be asking: what is the consequence of this efficiency?

The superficial response is that greater efficiency results in faster throughput and therefore greater output and productivity; it may also result in reduced costs and greater profit. So far, so good. This is a logical argument when we’re talking about machinery and possibly even production lines. It even has merit when talking about the everyday processes that employees use in order to get their jobs done: the fewer steps in the process, the faster they are able to get their work done.

But what about the other consequences of greater efficiency? If you’re wondering what they are, ask yourself this question: whenever new processes are introduced at work and time is saved, what does your employer ask you to do with the saved time? Do they allow you to go home early? Have longer lunch-breaks? Or do they, as I suspect, expect you to do more work in that saved time?

What kind of incentive is this? Who in their right mind (aside, of course, from Frederick Winslow Taylor and his deluded devotees) thinks this will encourage people to work harder? Efficiency works well with machines but we are not machines. A drive for ever greater efficiency is damaging the lives of a great many employees in fundamental ways, leading to less job satisfaction, greater stress and, as I wrote last week, increased suicide.

Over the coming weeks, I want to look at and, perhaps, challenge two sacred cows – that setting targets and managing to them is a good thing and that management/leadership can be taught. These two beliefs have been at the base of a system that has resulted in people killing themselves because of their work – what if both of those ideas are wrong? Perhaps it’s time to examine them a little more closely.

Sunday 8 November 2009

Remembering

On Remembrance Sunday, we take a little time to remember the people who have sacrificed their lives in conflicts around the world. There are many people who have had a significant impact on our lives and yet about whom we know very little, and I was reminded of this recently whilst posting a link to this blog. Before the website allowed me to post the link, it brought up a box containing some words written in very wobbly and indistinct text and asked me to type them in. You’ve probably come across the same thing any number of times on your travels around the internet.

This is known as a CAPTCHA, which is a contrived acronym for Completely Automated Public Turing test to tell Computers and Humans Apart. It’s a quick and easy way of ensuring that the person about to post the link or comment on the blog is a real person and not a computer, which may be trying to spam the site.
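If you’re curious about the mechanics, the idea is simple enough to sketch in a few lines. The Python fragment below is purely my own illustration – the function names and the plain-text “rendering” are my assumptions, not how any real site implements it – but it shows the challenge-and-compare idea at the heart of every CAPTCHA: generate a random string, display it to the visitor as distorted text, and check what they type back.

```python
import random
import string

def make_challenge(length=6):
    """Generate the string a real CAPTCHA would render as wobbly, distorted text."""
    alphabet = string.ascii_uppercase + string.digits
    return ''.join(random.choices(alphabet, k=length))

def check_response(challenge, response):
    """A human who can read the image types the string back correctly;
    a simple spam bot, which cannot 'see' the distortion, cannot."""
    return response.strip().upper() == challenge

challenge = make_challenge()
print(challenge)                                     # e.g. K7QX2M
print(check_response(challenge, challenge.lower()))  # True: a correct reading passes
print(check_response(challenge, "WRONG1"))           # False, barring a very lucky guess
```

In a real deployment, of course, the challenge is stored server-side and only the distorted image is sent to the browser, so a spamming program never sees the answer in the page itself.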

The Turing Test was first posited by Alan Turing, the British mathematician and computer scientist, in 1950. Turing is one of the fathers of the machine you’re using right now and foresaw a time when computers would be able to think for themselves. The Turing Test was designed to establish whether a human being would be able to tell whether he or she was conversing with another human being or with a soulless computer.

Turing was a brilliant man and contributed significantly to the British war effort through his work as a code-breaker at Bletchley Park; having mastered mathematics, cryptanalysis and logic, Turing successfully turned his hand to chemistry towards the end of his life. But the end of his life came too soon and he died at the age of 41, apparently by suicide. It is believed that Turing laced an apple with cyanide and then ate it – urban myth holds that this is where Apple computers got their logo, although the company denies it.

Turing, you see, was gay, and British society at that time was bigoted and intolerant: he was convicted of “gross indecency” and forced to take female hormones – a chemical castration – as a result of which he grew breasts. Recently, UK Prime Minister Gordon Brown apologised to Turing, praising his contribution to the war effort and stating “on behalf of the British Government, and all those who live freely thanks to Alan’s work, I am very proud to say: we’re sorry, you deserved so much better.”

The Turing Test is a test of humanity. When it came to Turing himself, society failed that test more than 50 years ago. With this apology, I’d like to believe we’ve finally passed it, but there is still bigotry and intolerance in workplaces and in society in general: remember that each time you take the Turing Test.

Wednesday 4 November 2009

Working to live - part two

I wrote a few weeks ago about the problems faced by France Telecom and the increase in the suicide rate amongst its workers (you can find the entry here). I can’t claim any credit (much as I’d like to) but the Schumpeter column in The Economist picked up on this story recently and added some worrying statistics to the mix.

America’s Bureau of Labor Statistics has calculated that work-related suicides increased by 28% between 2007 and 2008. Think about that for a moment: the number of people who were so unhappy with their work that the only way out was to kill themselves increased by more than a quarter in the space of one year – and, in the words of the article, “suicide is only the tip of an iceberg of work-related unhappiness.”

The Center for Work-Life Policy has found that between June 2007 and December 2008, the number of people who said they were loyal to their employers dropped from 95% to 39%. The number of people who said they trusted their employers fell from 79% to 22%. In other words, if the statistics are to be believed, 78% of people don’t trust their employers and 61% are disloyal or, at best, neutral. It seems that, increasingly, employees are finding themselves trapped in jobs they dislike, working for employers they distrust.

Unusually for The Economist, the article is deafeningly silent on what should be done about this. Telling managers to think more carefully about what they say, or advising workers that longer-term demographic trends mean they’ll have the upper hand eventually, is, frankly, fatuous. Something has to change and it has to change now.

Much of this unhappiness comes from the drive for efficiency, which I’ve labelled previously as the drive to achieve more with less. This in itself stems from the work of Frederick Taylor, who believed that work could be studied scientifically in order to find the most efficient way of working. In his words, “through enforced standardization of methods, enforced adoption of the best implements and working conditions, and enforced cooperation... faster work can be assured. And the duty of enforcing the adoption of standards and enforcing this cooperation rests with management alone.”

There are two things that I’d like to point out. Firstly, Taylor uses the word “enforce” (or variations of it) five times in two sentences; I don’t think that enforcement is a helpful or effective way of gaining co-operation. Secondly, Taylor – one of the first (if not the first) management consultants, the father of scientific management and the man whose theories permeate almost every part of business today – was a bit of a fraud.

I’ll be developing these ideas further over the coming weeks in a series of articles that challenge some of the sacred cows of business and I’d love to know your opinion; please do sign up, post comments and get involved in the debate.