
Use 1970s technology to improve your plant’s performance

September 27, 2012

One of the challenges AT&T faced in the 1970s was knowing when to send a technician out to fix a problem with a telephone line. The challenge emerged as businesses began using analog phone lines to send data. While a voice call could withstand reasonable signal degradation, data was far more sensitive to signal quality. Some problems were easy to identify, like a line broken by a downed pole. But there were dozens of other ways signal quality could drop without failing completely: a failing diode or transistor, electrical interference from incorrectly routed wiring, or a faulty ground system. These and other problems could crop up at any time as the telephone system constantly aged, was upgraded and expanded.

This was an important problem to solve. Data transmission was a significant new revenue source, and AT&T justified its monopoly at the time by arguing that only one company could keep all the technology working together at a reasonable price with the high level of voice and data quality it provided. But investing in aggressive line maintenance would mean rising labor costs. The inexpensive alternative, waiting for the customer to tell AT&T about a problem, would reduce revenue and fuel a push to deregulate a profitable monopoly.

When I first heard about this problem from a retired Bell Labs (AT&T’s R&D division) engineer, it sounded very similar to the labor problem companies face today. Companies compete by delivering high-quality products and services at a competitive price. The ongoing challenge is doing so while keeping an increasingly complex set of processes working in harmony.

My first thought was that manufacturing companies avoid this problem entirely in a way that AT&T couldn’t. Companies today add buffers to cover up small quality problems. Excess work-in-process (WIP) and finished goods inventory are held just in case raw materials are late. Quoted lead times are longer than truly necessary in case machines break down or people don’t show up. Premium freight is the final backup. The phone company didn’t have the luxury of backup systems. It was running a real-time service and had only one shot at getting it right, because only one set of copper wires ran from the switch to the customer’s phone. It couldn’t afford to run a “just in case” pair of wires to every phone that would be attached to a modem.

What struck me next: if the phone company could keep a complex, real-time system operating with no backup in the 1970s, what could we learn from that today? There may be new opportunities for companies to use those same techniques to shrink their buffers, which would mean lower costs and shorter lead times.

This was not an easy problem for Bell Labs to solve. Millions of wires had to be monitored constantly, and dozens of things might go wrong at any time. But instead of the traditional manufacturing approach of measuring all the things they knew typically went wrong, Bell Labs turned the problem around. Its engineers knew what the electrical signal of a perfect call looked like, and they knew the limits beyond which data would be garbled. Why not look for signals that were no longer perfect? (I’ll wave my hands as though it were easy, but they used some sophisticated statistical analysis to determine this.) An “off” signal indicated that data integrity was at the edge of degrading. This technique gave technicians time to diagnose and fix the problem before the customer noticed it.
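To make the idea concrete, here is a minimal sketch in Python of that kind of check: compare a line’s recent signal-quality readings against the known-good profile and flag it when it drifts statistically “off,” before it crosses the point where data garbles. The signal measure (signal-to-noise ratio), baseline numbers and thresholds are all illustrative assumptions, not Bell Labs’ actual parameters.

```python
# Sketch: flag a line whose signal is drifting away from the known-good
# profile before it reaches the point where data garbles.
# All numbers below are illustrative assumptions, not AT&T's real limits.

GOOD_MEAN = 30.0     # mean SNR (dB) of a "perfect" call, from history
GOOD_STDDEV = 1.5    # normal variation around that mean
GARBLE_LIMIT = 24.0  # below this, data integrity is lost

def check_line(snr_readings):
    """Classify a line from its recent signal-to-noise readings."""
    avg = sum(snr_readings) / len(snr_readings)
    drift = (GOOD_MEAN - avg) / GOOD_STDDEV  # std deviations below "perfect"
    if avg <= GARBLE_LIMIT:
        return "FAILED: the customer is already seeing garbled data"
    if drift > 2.0:  # statistically "off", but not yet failed
        return "DEGRADING: dispatch a technician before the customer notices"
    return "OK"

print(check_line([29.8, 30.1, 29.9]))  # OK
print(check_line([27.5, 26.9, 26.4]))  # DEGRADING
print(check_line([23.0, 22.5, 21.8]))  # FAILED
```

The point is the inversion: rather than testing for each known failure mode, measure distance from “perfect” and act while the signal is off but the service still works.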

So if we were to turn to my favorite topic, labor management, how could this be applied?

Historically, and for many in manufacturing today, we report what happened at the end of the shift in terms of production, quality and performance. We might also measure and report on common problems such as overtime, unplanned absence, low performance and machine downtime. But by then this information is all history. Sure, we can address some of these issues when they start trending, but if we are going to use statistics anyway, might there be a better way?

Best practice today is to measure some or all of these factors in real time so that they can be addressed as they occur, minimizing the disruption. We can also take that data and analyze the trends to make process improvements.

But that is not really taking advantage of what Bell Labs did to address its similar problem. We need a way to point to a problem as it is emerging and give supervisors more time to react. We need to give new supervisors instant experience with a production line.

I’m thinking about something like this:

We create a labor schedule for the next week. We have lots of information at our fingertips about the people on this schedule and the performance of this line. We know their tenure and whether they are contract employees. We have records of their propensity for unplanned absence, and whether it trends toward specific days or times of the year. We understand their safety records. We might also know their historical performance on this line or operation and the level of quality they produce.

We know what a perfect day of production looks like, and we can use statistical analysis similar to AT&T’s to notify us when it looks like that won’t happen. This focuses supervisors on the highest-risk lines and gives them time to do something about the problem before it happens.
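Here is a minimal sketch of what such a check might look like, in Python. The attributes mirror the list above; the field names, weights and alert threshold are my assumptions for illustration, not a production model.

```python
# Sketch: score tomorrow's schedule for the risk that a line misses plan,
# using the attributes discussed above. Weights are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Worker:
    name: str
    tenure_months: int
    is_contractor: bool
    absence_rate: float   # fraction of shifts missed, from history
    efficiency: float     # historical output vs. standard (1.0 = standard)

def shift_risk(crew):
    """Average per-worker risk; higher means more likely to miss plan."""
    total = 0.0
    for w in crew:
        if w.tenure_months < 3:
            total += 2.0                       # "green" employee
        if w.is_contractor:
            total += 0.5
        total += 10.0 * w.absence_rate         # likely no-show
        total += 3.0 * max(0.0, 1.0 - w.efficiency)  # runs below standard
    return total / len(crew)

crew = [
    Worker("A", 24, False, 0.02, 1.05),
    Worker("B", 1, True, 0.10, 0.80),
    Worker("C", 2, True, 0.08, 0.85),
]
score = shift_risk(crew)
print(f"Line risk score: {score:.2f}")
if score > 2.0:  # illustrative control limit, set from historical scores
    print("Flag this line for the supervisor before the shift starts")
```

In practice the weights and the control limit could come from regressing historical schedules against shifts that actually missed plan, the same kind of statistical analysis Bell Labs applied to line signals.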

I’ll take the simplest example that I still hear about on a regular basis on every continent I have visited:

“We scheduled too many inexperienced people and the line ran slow.”

This is an entirely predictable situation that can be addressed long before the shift starts. It might not be just “green” employees. You could be scheduling a line that has a high likelihood of a “no-show” tomorrow. It might be a combination of people and a specific type of product. What other combinations of employee attributes and personal history are going to cause production issues during that shift? Statistical analysis of a schedule will greatly improve your odds of knowing about a problem before the shift starts, giving you time to reduce the risk before costs and delays start piling up.
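To make one piece of that concrete: the “no-show” risk is simple arithmetic once you have each worker’s historical absence rate. A tiny sketch in Python, with illustrative rates and assuming absences are independent:

```python
# Sketch: probability that at least one scheduled worker no-shows tomorrow.
# Rates are illustrative, and independence between workers is assumed.

from math import prod

absence_rates = [0.02, 0.10, 0.08, 0.05]  # per-worker P(no-show), from history

p_everyone_shows = prod(1 - p for p in absence_rates)
p_some_no_show = 1 - p_everyone_shows

print(f"P(at least one no-show): {p_some_no_show:.1%}")  # about 23%
```

A crew whose members each look reliable on paper can still carry a nearly one-in-four chance of starting the shift short-handed, which is exactly the kind of risk worth flagging the night before.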

Twenty years ago we looked at past performance. Today we measure it in real time. Companies that want the next competitive advantage in labor productivity will start using the data they already have to predict tomorrow morning’s performance and do something about it tonight.
