
Acquiring New Technology? Why “Build-versus-Buy” is Dead


Still debating the build-versus-buy decision at your organization for your IT purchases?  If so, you probably aren’t getting the biggest bang for your IT dollar: Build-versus-buy is dead.  For better decision-making when acquiring IT systems, forget build-versus-buy and remember the Technology Acquisition Grid.  You’ll not only save money, you’ll make smarter decisions for your organization long term, increasing your agility and speeding time-to-market.

In this article, I describe Software-as-a-Service (SaaS), application hosting, virtualization and cloud computing for the benefit of CEOs, CFOs, VPs and other organization leaders outside of IT who often need to weigh in on these key new technologies.  I also describe how these new approaches have changed technology acquisition for the better – from the old build-versus-buy decision to the Technology Acquisition Grid. Along the way, you’ll learn some of the factors that will help you decide among the various options, saving your organization time and money.

The Old Model: Build-versus-Buy

When I earned my MBA in Information Systems in the mid-1990s, more than one professor noted that the build-versus-buy decision was a critical one because it represented two often-costly and divergent paths.  In that model, the decision to “build” a new system from scratch gave the advantage of controlling the destiny of the system, including every feature and function.  In contrast, the “buy” decision to purchase a system created by a supplier (vendor) brought the benefit of reduced cost and faster delivery because the supplier built the product in advance for many companies, then shared the development costs across multiple customers.

Back then, we thought of build versus buy as an either-or decision, like an on-off switch, something like this:

Build-versus-Buy Switch

In the end, the build-versus-buy decision was so critical because, for the most part, once you made the decision to build or buy, there was no turning back.  The costs of backpedaling were simply too high.

The Advent of Application Hosting, Virtualization, SaaS and Cloud Computing

During the 2000s, innovations like application hosting, virtualization, software-as-a-service (SaaS) and cloud computing changed IT purchasing entirely, from traditional build-versus-buy to a myriad of hosting and ownership options that reduce costs and speed time-to-market.  Now, instead of resembling an on-off switch, the acquisition decision started to look more like a sliding dimmer switch on a light, like this:

 

Build-versus-Buy Slider

Suddenly, there were more combinations of options, giving organizations better control of their budgets and the timeline for delivering new information systems.

What are these technologies, and how do they affect IT purchasing?  Here’s a brief description of each:

Application Hosting

During the dot-com era, a plethora of application service providers (ASPs) sprang up with a new business model.  They would go out and buy used software licenses, host the software at their own facilities, and lease the licenses to their customers on a monthly basis.  The customers of ASPs benefited from the lower cost of ownership and reduced strain on IT staff to maintain yet another system, while the ASPs made money by pooling licenses across customers and making use of often-idle software licenses.

While the dot-com bust put quite a few ASPs out of business, the application hosting model, where the software runs on hardware supported by a hosting company and customers pay monthly or yearly fees to use the software, still survives today.

Virtualization

One of the first technologies to change the build-versus-buy decision was virtualization. By separating the hardware from the software, virtualization decouples hardware purchases from the need for new software.  In virtualization, first, computer hardware is purchased to support the organization’s overall technology needs.  Then, a self-contained version of a machine – a “virtual” machine – is installed on the hardware, along with application software, such as supply chain or human resources software, that the business needs at that point in time.

When the organization needs a new software application that is not compatible with the first application because it runs on another operating system, it installs another virtual machine and another application on the same hardware.  By doing this, the organization not only delivers software applications more quickly, because it doesn’t need to buy, install and configure hardware for every application, but also spends less on hardware, because it can add virtual machines to take advantage of unused processing power.

Even better, virtual machines can be moved from one piece of hardware to another relatively easily, so like a hermit crab outgrowing its shell, applications can be moved to new hardware in hours or days instead of weeks or months.

Software-as-a-Service (SaaS)

Like virtualization, Software-as-a-Service, or SaaS, reduces the costs and time required to deliver new software applications.  In the most common approach to SaaS, the customer pays a monthly subscription fee to the software supplier based on the number of users on the customer’s staff during a given month.  As an added twist, the supplier hosts the software at their facilities, providing hardware and technical support, all within the monthly fee.  So, as long as a reliable Internet connection can be maintained between the customer and the SaaS supplier, the cost and effort to support and maintain the system are minimal.  The customer spends few resources and worries little about the software (assuming the SaaS supplier holds up its side of the bargain), enabling the organization to focus on serving its own customers instead of on information technology.

Cloud Computing

The most recent of these innovations, cloud computing brings together the best qualities of virtualization and SaaS.  Like SaaS, with cloud computing both hardware and software are hosted by the supplier.  However, where the SaaS model is limited to a single supplier’s application, cloud computing uses virtual machines to host many different applications with one (or a few) suppliers.  Using this approach, the software can be owned by the customer, but hosted and maintained by the supplier.  When the customer needs to accommodate more users, the supplier sells the customer more resources and more licenses “on demand”.  Depending upon the terms of the contract, either the customer’s IT staff or the supplier maintains the hardware.  In addition, in most cases, the customer can customize the software for their own needs, to better represent the needs of their own customers.

Adding Application Hosting, Virtualization and Cloud-Computing to the Mix – The Technology Acquisition Grid

Remember the dimmer switch I showed a few moments ago?  With the addition of application hosting, virtualization, SaaS and cloud computing to the mix, it’s not only possible to choose who owns and controls the future of the software, it’s also possible to decide who hosts the software and hardware – in-house or hosted with a supplier – as well as how easily it can be transferred from one environment to another.  That is, it’s now a true grid, with build-to-buy on the left-right axis and in-house-to-hosted on the up-down axis.  The diagram below shows the Technology Acquisition Grid, with the four main combinations of options to consider when acquiring technology.

Technology Acquisition Grid

 

Here’s where application hosting, SaaS, virtualization and cloud computing fit into the Technology Acquisition Grid:

Technology Acquisition Grid with New Technologies

 

Making a Decision to Host, Virtualize, go SaaS, or seek the Cloud

If the rules of the game have now changed so much, how do we make the decision to use virtualization, application hosting, SaaS or cloud computing, as opposed to traditional build and buy?  There seem to be a few key factors that drive the decision.

At the most basic level, it comes down to how much control – and responsibility — your organization wants over the development of the software and the maintenance of the system.  Choose an option in the top-left of the Technology Acquisition Grid, and you have greater control of everything; choose an option at the bottom-right, and you have far less control and far less responsibility for the system.

In my own experience advising clients during technology acquisition and leading technology initiatives, decision-makers tend to choose a “control everything” solution because it’s the easiest to understand and poses the least risk.  While this may, in the end, be the best answer, organizations should weigh the other options as well.  Certainly, more control usually sounds really good, but it almost always comes with much higher costs and delays use of the system by months.  Particularly for smaller organizations, which probably need those IT dollars to serve their own customers more effectively, a “control everything” answer is often the wrong decision.

Which should your organization choose?  Start by making an effort to include software products that take advantage of hosting, virtualization, SaaS and cloud computing among your choices when you start your search.  Then, weigh the benefits and downsides of each option and combination of options, choosing the one that balances cost and time-to-market with your own customers’ needs and your tolerance for risk. A good consulting company like Cedar Point Consulting can help you do this, as can your organization’s IT leadership.  Using this approach, you’re sure to free yourself from the old rules of build-versus-buy, delivering more for your own customers at a much lower cost.

Donald Patti is a Principal Consultant with Cedar Point Consulting, a management consulting practice based in the Washington, DC area, where he advises businesses in technology strategy, project management and process improvement. Cedar Point Consulting can be found at http://www.cedarpointconsulting.com.

 


In Bank Wars, the Customer Strikes Back


As a long-time observer of the financial services industry, I was wondering how long Bank of America could hold onto plans for a new $5 monthly fee on debit cards, starting in 2012.  The answer: About a month.  Earlier today, B of A announced that they are ending plans for the new fee roughly 32 days after it was announced, according to the Huffington Post.  (Some wise senior manager at the bank probably had a monthly report dropped on their desk today showing customer abandonment rates shooting through the roof, and called off the fee.)

In the past, banks have had the wiggle room to both add and increase fees, nudging them up to increase profits or stave off losses.  That, of course, was in the era before the Tea Party, Occupy Wall Street and a myriad of other similar populist uprisings.

Last quarter, Netflix CEO Reed Hastings learned what it means to draw the ire of the American consumer during tough economic times, when he was forced to apologize for a planned product spin-off that amounted to a 60% price increase by the company.  Nearly 600,000 Netflix customers dropped the service after the price increase was initially announced and before his apology.


Departing Waterfall – Next Stop Agile


It’s been more than a year since I penned “Before Making the Leap to Agile”, an article intended to guide everyone from C-level executives to IT project managers on the adoption of Agile. The goal was to offer up some of the lessons I learned through actual implementations, so that readers could avoid some of the pitfalls associated with Agile adoption.  While a few saw it as an assault on Agile, many understood that my goal was to assist Agile adopters and thanked me for writing it.

Five-thousand-plus page views later, I’ve finally cleared my plate enough to address an equally important topic: why people and organizations are making the shift to Agile from the more typical Waterfall. After all, Agile is a revolutionary approach to software development and it continues to grow in popularity, so I think it’s important for those who do not yet use Agile to understand why others have embraced it.


The Four-Funnel Approach — Strategic Planning for Small Businesses


According to Confucius, “An unpointed arrow never reaches its target.” Yet, how many small business owners don’t do any strategic planning, hoping that floating in the wind will bring them a bulls-eye and success?

Arguably, strategic planning is even more important to many small businesses, including start-ups, technology companies and those in highly-competitive markets, so it’s critical that small business owners create and follow strategic plans. Plus, it’s New Year’s Day. What better time to start strategic planning for your small business?

Admittedly, finding a simple way to build a strategic plan quickly is much of the problem. While the Balanced Scorecard is, in many ways, a better strategic planning system, it takes quite a while to develop a strategic plan using the Balanced Scorecard and requires training in the methodology to carry out effectively. As a result, the Balanced Scorecard is rarely an option for small businesses, which cannot afford to hire an expert facilitator for a multi-week endeavor.

Should small business owners simply give up, assuming good strategic planning is out of reach? Of course not. There is an effective way for small businesses to create and execute strategic plans – the Four-Funnel model. While not as good as the Balanced Scorecard, in my opinion, the Four-Funnel approach is simpler, faster and can be done by a business as small as one or two individuals.

In this article, I outline the Four-Funnel model and describe how it can help small businesses create solid strategic plans.

A Little Four-Funnel Background

I originally encountered the makings of the four-funnel strategic planning approach in business school at RH Smith at the University of Maryland, where Dr. Brad Wheeler (now at Indiana) described a four-funnel approach to problem-solving. In his approach, he drew four funnels on a white board and labeled them, “Identify Problems”, “Prioritize Problems”, “Identify Solutions” and “Select Solutions”. The diagram looked something like this:

Four-Funnel Problem Solving

As you can see, the four funnels represent the increase and decrease of information as you complete each step in the problem-solving process. First, identifying problems gives you a list of possible problems; prioritizing those problems reduces the list to only those problems that are most important; identifying solutions gives you a range of solutions for each problem; while selecting solutions again reduces your list to only the best solutions for the most critical problems. In short, there’s really nothing radical about four-funnel problem-solving, but it’s simple and it works.

Fast-forward to our strategic management course (also at RH Smith), where we were expected to use business cases to develop strategic plans in a matter of hours. Certainly, we learned the strategic planning process in depth during the class, but our challenge was to deliver a good plan in a very short period of time, particularly for case competitions. In response, my team and I applied the four-funnel approach, this time to strategic planning:

Four-Funnel Strategic Planning

Since then, I’ve used four-funnel with my own small business more than a half-dozen times, while coaching other small businesses through the process. It only takes a half-day or so to do, so it’s not too much time and effort, considering the big payoff.

Four-Funnel Strategic Planning Steps

Ready to start? Here’s each step:

  1. Identify strategic needs. Using SWOT analysis or a similar technique, write down the strengths, weaknesses, opportunities and threats relevant to your company and your industry. Many of these are easy to identify — a competitor just moved in down the street (Threat), only one staff member knows how to operate a particular machine (Weakness), you just received recognition as the best in your region at what you do (Strength); or, one of your vendors just created a new product and offered you exclusive distribution rights in your area (Opportunity).
  2. Prioritize Strategic Needs. Along with your co-workers, you’re bound to come up with some very solid strategic needs, especially if you do a little homework about industry trends before you meet. But, if you stop there and try to achieve all of them, you’ll almost certainly fail. Before you move forward, you need to prioritize your strategic needs, eliminating the ones that are least likely to succeed and that produce the least value. I suggest narrowing down your list to four or five strategic needs for your entire business if you have fewer than ten people, and no more than three needs per department (e.g., marketing) if you’re larger.
  3. Identify Strategic Actions. For each high-priority strategic need, brainstorm ways to meet that need or take advantage of that opportunity. Your possible actions don’t need to be long or detailed — a one-sentence explanation is enough. And, be sure to give everyone an opportunity to suggest actions — you’ll be surprised to find that the most innovative marketing ideas don’t necessarily come from your marketing manager, or that the best technology initiatives don’t come from IT.
  4. Select Strategic Actions. Just as you prioritized strategic needs, you need to do the same for your actions. For each high-priority strategic need, pick one or two of the best actions for you and your business to take during the coming year. Make sure they’re achievable and affordable – no point in risking your current business on a long shot.
  5. Assign and Act. As a team, assign someone in your business to complete each strategic action. As you do, make certain the person assigned has the authority, knowledge and resources — including funding — to complete the action. If they don’t, you’re merely setting them — and you — up for failure. In addition, be sure to stagger the deadlines for each strategic action throughout the course of the year. Otherwise, you and your team will spend December rushing around to complete them, learning to hate the strategic planning process rather than appreciate its value.

Does it Work?

It’s not a miracle cure-all, but the four-funnel approach does work. In my experience, businesses that adopted four-funnel strategic planning grew around 20-30% per year, while those same businesses grew at 0-10% beforehand.  (My own business grew by triple digits every year but one, so of course I’m a big believer in four-funnel.)  As a lawyer would say, that’s not a guarantee of future success, but past results have been good.

When you try the process yourself, you’ll find it takes between four and eight hours for a group of three or four to complete the four-funnel approach, depending upon the size of your business. That’s not much of a time commitment, considering it’s going to benefit your organization for the next year, and beyond.

Give it a try. And, if you’d like some assistance, contact Cedar Point Consulting and we can help.

Donald Patti is a Principal Consultant with Cedar Point Consulting, a management consulting practice based in the Washington, DC area, where he advises businesses in project management, process improvement, and small business strategy.  Cedar Point Consulting can be found at http://www.cedarpointconsulting.com.

Intuitive to Whom? (In Web Design, it Matters)


During a recent Management Information Systems course I taught for the University of Phoenix, I posed the discussion question to students, “What do you think are the most important qualities that determine a well-designed user interface?” While responses were very good, nearly all of my students used the term “intuitive” in their response without providing a more detailed description, as though the term has some universal, unambiguous meaning to user interface (user experience) designers and web users alike.

I responded by asking, “Intuitive to whom?…Would a college-educated individual and a new-born infant both look at the same user interface and agree it is intuitive? Or, would the infant prefer a nipple providing warm milk to embedded-flash videos of news stories?”

Far from obvious, an “intuitive” user interface is extremely hard to define because “intuitive” means many different things to many different people. In this article, I challenge the assumption that “intuitive” is obvious and suggest how we can determine what intuitive “is”.

Nature and Nurture

Our exploration of intuitive user interfaces and user experience starts with “nature” and “nurture”, much like the “Nature versus Nurture” debate that occurs when explaining the talents and intelligence of human beings. For those of us who haven’t opened a genetics book in a few decades, if ever, “Nature” assumes that we have certain talents at birth, while “Nurture” proposes that we gain talents and abilities over time.

Certainly, “Nature” plays a role in an intuitive user interface. According to research by Anya Hurlbert and Yazhu Ling, there’s a great deal of evidence that we are born with color preferences and that color preferences naturally vary by gender. In addition, warning colors like red or yellow, such as red on stop signs and yellow on caution signs, are likely a matter of science and genetics rather than learned after we’re born. So, an “intuitive” interface is partly determined by our genes.

“Nurture” also plays a big role in determining our preferences in a user interface. For example, link-underlining on web pages and word density preferences are highly dependent upon your cultural background, according to Piero Fraternali and Massimo Tisi in their research paper, “Identifying Cultural Markers for Web Application Design Targeted to a Multi-Cultural Audience.” While research in personality and user interfaces is still in its infancy, there’s a strong indication that CEOs have different color preferences from other individuals, as Del Jones describes in this USA Today article.

But, what about navigation techniques, like tabs and drop-down menus? In a recent conversation with Haiying Manning, a user experience designer with the College Board, I was told that “tabs are dead.” This crushed me, quite frankly, because I still like tabs to effectively group information and have a great deal of respect for Haiying’s skills and experience. As a Gen-Xer who spent much of his teen years sorting and organizing paper files on summer jobs, I’m also very comfortable with tabs in web interfaces, as are my baby-boomer friends. My Net-Gen (Millennial) friends seem to prefer a screen the size of a matchbox and a keyboard with keys the size of ladybugs, which I have trouble reading.

In the end, because of “Nature” and “Nurture”, the quest for an “intuitive” user interface is far more difficult than selection of a color scheme and navigation techniques everyone will like. What appeals to one gender, culture or generation is unlikely to appeal to others, so we need to dig further.

It’s all about the Audience

Looking back on past successful projects, I’ve found that the best user interface designers I’ve worked with learned a great deal about their audience – not just through focus groups and JAD sessions, but through psychometric profiling and market research. This idea of segmenting audiences and appealing to each audience separately is far from new. Olga De Troyer called it “audience-driven web design” back in 2002, but the concept is still quite relevant today.

Once they better understood their target customers, these UI designers tailored the user interface to create a user experience that was most appealing to their user community. In some cases, they provided segment-targeted user interfaces – one for casual browsers and one for heavy users, for example. In other cases, they made personalization of the user interface easier, so that heavy users could tailor the interface based on their own preferences.

They also mapped out the common uses (use cases or user stories) for their web sites and gave highest priority to the most used (customer support) or most valuable (buying/shopping) uses, ensuring that they maximized value for their business and the customer. More importantly, the user interface designers didn’t rely upon the “the logo always goes at the top left” mind-set that drives most web site designs today.

Think about the Masai

In hopes of better defining what “intuitive” is, I spoke with Anna Martin, a Principal at August Interactive and an aficionado of web experience and web design. Evidently, “intuitive” is also a hot topic with Anna, because she lunged at the topic, responding:

“Would you reach for a doorknob placed near the floorboard; or expect the red tube on the table to contain applesauce? Didn’t think so. But what’s intuitive depends largely on what you’re used to.  Seriously, talk to a Masai nomad about a doorknob – or ketchup for that matter – and see what you get. And good luck explaining applesauce. (Cinnamon anyone?). Clearly intuition is dependent on what comes NATURALLY to a user – no matter what the user is using.

So why would the web be any different? It’s not. Virtual though it may be, it’s still an environment that a PERSON needs to feel comfortable in in order to enjoy. Bottom line is this…if you wouldn’t invite your 6 year old niece or your 80 year old grandmother to a rage (did I just date myself?) then don’t expect that every website will appeal to every user.

Know your audience, understand what makes them comfortable; and most importantly try to define what ‘intuitive’ means specifically in regards to sorting, finding, moving, viewing, reading and generally experiencing anything in their generation.”

So, audience-driven web design has firmly embedded itself into the minds of great designers, who must constantly challenge the conventions to create truly creative interactive experiences on the web. Consequently, as the field of user design transitions into a world of user experience, it’s going to require second-guessing of many of the design conventions that are present on the web today. This not only means pushing the envelope with innovative design, it also means we need to have a good handle on what “intuitive” really is.

Donald Patti is a Principal Consultant with Cedar Point Consulting, a management consulting practice based in the Washington, DC area, where he advises businesses in project management, process improvement, and small business strategy.  Cedar Point Consulting can be found at http://www.cedarpointconsulting.com.

Are You Planning to Crash?


Nearly every experienced project manager has been through it. You inherit a project with a difficult or near-impossible schedule and the order comes down to deliver on time.  When you mention how far the project is behind, you’re simply told to “crash the schedule”, or “make it happen.”

As a long time project manager who now advises others on how best to manage projects and project portfolios, the term “schedule crashing” still makes me bristle. I picture a train wreck, not a well-designed product or service that’s delivered on time, and for good reason. While schedule crashing sounds so easy in theory, in practice schedule crashing is a very risky undertaking that requires some serious evaluation to determine whether crashing will actually help or hurt.

In this article, I’ll explain the underlying premise behind schedule crashing and describe some of the typical risks involved in a schedule crashing effort.  Then, I’ll provide seven questions that can help you assess whether schedule crashing will really help your project.  Combined, the schedule crashing assessment and the risks can be brought to executive management when you advise them about how best to proceed with your project.

Schedule Crashing Defined

As defined by BusinessDictionary.com, schedule crashing is “Reducing the completion time of a project by sharply increasing manpower and/or other expenses,” while the Quality Council of Indiana‘s Certified Six Sigma Black Belt Primer defines it as “…to apply more resources to complete an activity in a shorter time.” (p.V-46). The Project Management Body of Knowledge (PMBOK), fourth edition describes schedule crashing as a type of schedule compression, including overtime and paying for expedited delivery of goods or services as schedule crashing techniques (PMBOK, p. 156), though I generally think of overtime as another type of schedule compression – not crashing.

From a scheduling perspective, schedule crashing assumes that a straight mathematical relationship exists between the number of laborers, the number of hours required to complete the task, and the calendar time required to complete the task. Said simply, if a 40-hour task takes one person five days to complete (40 hours ÷ (1 person × 8 hours/day) = 5 days), then according to schedule crashing, assigning five resources would take one day (40 hours ÷ (5 people × 8 hours/day) = 1 day).
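To make that idealized arithmetic concrete, here is a minimal sketch in Python; the function name and the figures are mine, purely for illustration, and the formula simply restates the assumption above rather than any project management standard:

```python
def naive_crash_duration(task_hours, people, hours_per_day=8):
    """Calendar days to finish a task if effort divides perfectly across people.

    This is the idealized crashing assumption only; as the next section
    explains, real projects rarely behave this way once communication and
    ramp-up costs enter the picture.
    """
    return task_hours / (people * hours_per_day)

# The 40-hour example from the text:
print(naive_crash_duration(40, 1))  # 5.0 days with one person
print(naive_crash_duration(40, 5))  # 1.0 day with five people, in theory
```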

The Risks of Crashing

Frederick Brooks had much to say about the problems with schedule crashing in “The Mythical Man-Month“. In this ground-breaking work about software engineering, Brooks explains that there are many factors that might make schedule crashing impractical, including the dependency of many work activities on their preceding activities and the increased cost of communication. This phenomenon is now referred to as Brooks’s Law – adding resources to a late project actually slows the project down. I personally saw Brooks’s Law in action on a large program led by a prestigious consulting firm where the client requested that extra resources be added in the final two months of the program; because the current resources were forced to train new staff instead of completing work, the program delivered in four more months instead of two.

Additional risks of crashing include increased project cost if the crashing attempt fails, delayed delivery if the crash adversely impacts team performance, additional conflict as new team members are folded into the current team to share responsibility, risks to product quality from uneven or poorly coordinated work, and safety risks from the addition of inexperienced resources.

In short, schedule crashing at its most extreme can be fraught with risks. Managers at all levels should be very cautious before recommending or pursuing a crashing strategy.

Making the Call to Crash

So, how can a project manager decide if crashing will help? Here are seven questions I ask myself when deciding if crashing is likely to succeed:

  1. Is the task (or group of tasks) in the critical path? Tasks in the critical path are affecting the overall duration and the delivery date of your project, while tasks outside of the critical path are not affecting your delivery date. Unless the task you’re considering crashing is in the critical path or will become a critical-path activity if it substantially slips, crashing the activity is a waste of resources.
  2. Is the task (or group of tasks) long? If the task is short and does not repeat over the course of the project, then it’s unlikely you’ll gain any benefit from crashing the activity. A long task or task group, however, is far more likely to benefit from the addition of a new resource, as are tasks that require similar skills.
  3. Are appropriate resources available? Crashing is rarely useful when qualified resources are not available. Is there a qualified person on the bench who can be added to the project team to perform the work? If not, can someone be brought in quickly who has the needed skills? Recruiting skilled resources is a costly and time-consuming activity, so by the time the resource(s) are added to your team, the task may be complete and your recruiting efforts wasted.
  4. Is ramp-up time short? Some types of projects require a great deal of project-specific or industry-specific knowledge and it takes time to transfer that knowledge from the project team to the new team members. If the ramp-up time is too long, then it may not make sense to crash the schedule.
  5. Is the project far from completion? Often, people consider crashing when they’re near the end of a project and it’s become clear that the team will not meet its delivery date. Yet, this may be the worst time to crash the schedule. Frederick Brooks told the story of his own schedule crashing attempt in “The Mythical Man-Month”, where he added resources to one of his projects at the tail end, which further delayed delivery. In most cases, schedule crashing is only a viable option when a project is less than half complete.
  6. Is the work modular? On many projects, the work being delivered is modular in nature. For example, in automotive engineering, it’s possible for one part of the team to design the wiring for a new vehicle model while another part of the team designs the audio system that relies upon electricity, as long as points of integration and dependencies are defined early. Through fast-tracking, or completing these tasks in parallel, it becomes beneficial to also add resources, crashing the schedule.
  7. Will another pair of hands really help? All of us have heard that “too many cooks can spoil the broth,” but this also applies to engineering, software development and construction. Consider where the new resources would sit, how they would integrate with the current team, and whether their introduction would cause an unnatural sharing of roles.

If you’ve answered these questions and responded “yes” to at least five of the seven, then you have a reasonably good schedule-crashing opportunity; a “yes” to three or four is of marginal benefit, while a “yes” to only one or two is almost certain to end badly.
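As a rough illustration only, that scoring rule can be captured in a few lines of Python; the thresholds simply restate the paragraph above, and the abbreviated question labels and sample answers are hypothetical:

```python
def crash_outlook(yes_count):
    """Map the number of 'yes' answers (0-7) to the rough verdict in the text."""
    if yes_count >= 5:
        return "reasonably good crashing opportunity"
    if yes_count >= 3:
        return "marginal benefit"
    return "almost certain to end badly"

# Hypothetical answers to the seven questions above:
answers = {
    "critical path": True, "long task": True, "resources available": True,
    "short ramp-up": False, "far from completion": True,
    "modular work": True, "extra hands help": False,
}
print(crash_outlook(sum(answers.values())))  # reasonably good crashing opportunity
```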

Alternatives to the Crash

Fortunately, there are alternatives to schedule crashing that may be more appropriate than the crash itself.

  1. Increase hours of current resources. For a limited time period and within reason, asking current team members to work overtime can help you reach your delivery date more quickly than schedule crashing. When considering overtime, it’s important to remember the caveats, “a limited time period” and “within reason”. Asking resources to work 50-60 hours a week for six months is unreasonable, as is asking resources to work 70 hours per week for a month for all but the most critical projects.
  2. Increase efficiency of the current team. Though it’s surprisingly rare on projects, examining current work processes and adding new time-saving tools can improve the productivity of a team by 10% to 50% or more if a project is long. I once led a team that increased its productivity by roughly 30% simply by re-sequencing work activities and adding a single team member to speed up cycle time at a single step in the process.
  3. Accept the schedule. In some cases, it’s better to offset the downside effects of late delivery rather than attempt to crash the schedule. For example, this might mean using a beta or prototype for training rather than a production-ready product.

A Final Caution About Crashing

Because it’s rarely well understood by anyone other than project managers, schedule crashing is often recommended by co-workers who really don’t understand the implications of the decision.  While they see an opportunity to buy time, they almost never see the inherent risks.

As a result, it’s critical that project managers not only assess the likelihood of success when considering crashing as an option, they also educate their stakeholders, their sponsor and other decision-makers about the risks of a schedule-crashing approach.  Doing anything less perpetuates the myth that crashing is a panacea that cures all that ails a late project, potentially creating much bigger problems for everyone down the road.

Donald Patti is a Principal Consultant with Cedar Point Consulting, a management consulting practice based in the Washington, DC area, where he advises businesses in project management, process improvement, and small business strategy. Cedar Point Consulting can be found at http://www.cedarpointconsulting.com.

For Successful Business Leaders, Sometimes it’s Right to be Wrong


One dreary October day in the late 1990s, I sat in my local Fidelity office waiting to shift funds from one account to another, a common practice in an era before online banking and financial services.  Those fifteen minutes of sitting would have remained forever unmemorable, were it not for the fact that I picked up a business magazine sitting on the coffee table next to me and read a brief article on CEOs and decision-making.

According to the article, researchers studied the decision-making of CEOs at both successful and unsuccessful businesses, categorizing their strategic decisions along two dimensions — correct/incorrect and fast/slow, as shown in the table below:

CEO Decision-Making Success | Fast | Slow
Correct                     |      |
Incorrect                   |      |

As you might surmise from the labels on the table, “correct” was defined as making decisions that accurately gauged the market, adapted to changes in the business environment, and made new expenditures or trimmed costs in ways that helped their businesses to out-perform competitors; “incorrect” decisions were the opposite.  Along the other dimension, “fast” decision-makers were among the first to make a decision – right or wrong – and then act on it, while slow decision-makers took their time, often deciding and acting well after their counterparts in their industry.

Not surprisingly, the CEOs who made fast, correct decisions led the most successful businesses, while the CEOs who made slow, incorrect decisions were the least successful. However, the second most successful group of CEOs came as quite a surprise to me, ultimately affecting how I lead and make decisions to this day. It turns out that the second-most successful CEOs made fast-but-wrong decisions — not the CEOs who made slow-but-correct ones. The completed table below summarizes this:

CEO Decision-Making Success | Fast                            | Slow
Correct                     | 1st: Most successful CEOs       | 3rd: Third-most successful CEOs
Incorrect                   | 2nd: Next most successful CEOs  | 4th: Least successful CEOs

Why were fast-but-incorrect CEOs the second most successful group? It turns out the slow-moving-yet-correct CEOs were simply too slow to take advantage of a changing business landscape. They waited too long, letting good opportunities slip by and causing their businesses to under-perform. However, the fast-yet-incorrect CEOs did something that was really not very difficult: they monitored the results of their decisions and, when they determined they were wrong, they corrected their mistakes.

All of this makes thorough, complete analysis and extreme caution – even in the worst of business climates — look like a pretty bad decision-making model.  Sure, we should base our decisions on facts, research and data, weighing the options along with our trusted advisers. But, we shouldn’t wait until the last piece of information finally makes its way to our desk, assuming that having a complete picture is the only way to certain success.  Because if we do, we’ll probably be too late.

(If you’ve read my previous articles, you’ll notice that I’m pretty thorough about citing material appropriately. The article to which I refer needs an appropriate reference, but while I’ve looked and looked, I simply can’t find the original article, published in either Fast Company or Inc Magazine between 1996 and 1998.  Certainly, the publisher and the researchers deserve the credit, so if you know of this article, send me an e-mail and I’ll give credit where it’s due).

Donald Patti is a Principal Consultant with Cedar Point Consulting, a management consulting practice based in the Washington, DC area, where he advises businesses in project management, process improvement, and small business strategy. Cedar Point Consulting can be found at http://www.cedarpointconsulting.com.

