Measurement Techniques
Time Study
Time study is a direct observational technique in work measurement that employs a stopwatch to record the time taken by a qualified worker to perform a task under standard conditions, enabling the establishment of time standards for productivity analysis and planning.[8] The method, rooted in scientific management principles, involves systematically breaking down the task into short, measurable elements to capture precise timings and identify inefficiencies.[21]
The procedure begins with selecting a representative task and dividing it into elements, such as manual operations, machine time, or delays, each defined by clear start and end points for accurate observation.[8] Observations are then conducted using a stopwatch in methods like cumulative timing, where the watch runs continuously and readings are noted at element boundaries, or flyback timing, where it is reset after each element.[8] Multiple cycles—typically 10-20—are timed to ensure reliability, with the average observed time calculated for each element to account for natural variations.[8][21]
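The averaging step described above can be sketched in a few lines of Python. The readings below are invented for illustration, with each cycle's stopwatch readings taken from zero at the cycle start:

```python
def element_times(cumulative_readings, start=0.0):
    """Convert cumulative stopwatch readings (seconds) for one cycle
    into individual element times by taking successive differences."""
    times = []
    previous = start
    for reading in cumulative_readings:
        times.append(reading - previous)
        previous = reading
    return times

# Three observed cycles of a three-element task (hypothetical readings).
cycles = [
    [4.1, 9.8, 12.0],
    [4.3, 10.1, 12.5],
    [4.0, 9.6, 11.9],
]

per_cycle = [element_times(cycle) for cycle in cycles]

# Average observed time for each element across the cycles.
averages = [sum(col) / len(col) for col in zip(*per_cycle)]
print(averages)  # roughly [4.13, 5.7, 2.3] seconds
```

In practice many more cycles would be timed, but the arithmetic of converting boundary readings into element times and averaging them is the same.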
For effectiveness, the study requires a qualified worker who is experienced, trained, and capable of performing at a standard pace while maintaining quality and safety.[8] Standard conditions must prevail, including optimized methods, tools, materials, and environmental factors, to produce representative data.[8] A sufficient sample size, often determined statistically for 95% confidence and ±5% accuracy, further ensures the observations reflect typical performance.[8]
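One common textbook formulation of the statistical sample-size check uses the mean and standard deviation of a pilot sample; the exact formula varies between references, so the version below is a sketch under that assumption:

```python
import math

def required_cycles(pilot_times, z=1.96, accuracy=0.05):
    """Cycles needed so the estimated mean element time falls within
    ±accuracy (here ±5%) of the true mean at the confidence level
    implied by the z-score (1.96 for 95%). One common textbook
    formulation; details differ between sources."""
    n = len(pilot_times)
    mean = sum(pilot_times) / n
    variance = sum((t - mean) ** 2 for t in pilot_times) / (n - 1)
    s = math.sqrt(variance)
    return math.ceil((z * s / (accuracy * mean)) ** 2)

# Hypothetical pilot observations of one element, in seconds.
pilot = [3.8, 4.6, 4.0, 5.1, 4.3, 3.9, 4.8, 4.1, 4.5, 3.7]
print(required_cycles(pilot))
```

The more variable the pilot observations, the more cycles the formula demands before the average can be trusted to the stated accuracy.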
Performance rating evaluates the worker's speed and effectiveness relative to a normal pace, defined as 100% for a qualified worker exerting average effort without strain.[8] Ratings are assessed using scales like the Westinghouse system, considering factors such as skill, effort, working conditions, and consistency.[21] The normal time is then derived by adjusting the observed time for this rating:
Normal time = Observed time × Rating factor

where the rating factor is the performance rating divided by 100 (e.g., a rating of 110% yields a factor of 1.10).[8][21]
To obtain the standard time, allowances for personal needs, fatigue, and delays are added to the normal time, typically ranging from 5-15% depending on task demands.[8] The formula accounts for these as a percentage of working time:
Standard time = Normal time × (1 + Allowances)

This adjustment ensures the time standard is realistic and achievable over a full workday.[8][21] Time study is particularly suited to repetitive manual tasks, whereas estimating methods may be applied to more complex or irregular operations.[8]
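The two adjustments can be sketched together as follows; this is a minimal illustration in which the allowance is applied as a percentage of normal time, one common convention:

```python
def normal_time(observed_time, performance_rating):
    """Normal time = observed time x (performance rating / 100)."""
    return observed_time * performance_rating / 100.0

def standard_time(normal, allowance_pct):
    """Standard time = normal time x (1 + allowance fraction)."""
    return normal * (1.0 + allowance_pct / 100.0)

# A 2.0-minute observed cycle rated at 110%, with a 10% allowance.
nt = normal_time(2.0, 110)   # 2.2 minutes
st = standard_time(nt, 10)   # 2.42 minutes
```

A worker rated faster than normal pace thus yields a longer normal time than the raw observation, and the allowance then pads that figure into an achievable full-day standard.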
Work Sampling
Work sampling, also known as activity sampling, is a statistical technique used in work measurement to estimate the proportion of time spent on various activities by conducting random observations over an extended period, rather than continuous monitoring. This method is particularly suited for analyzing irregular or variable tasks where direct timing would be inefficient or disruptive, such as in office environments or group production settings.
The methodology involves selecting random points in time to observe and record the activities being performed by workers or machines, ensuring that observations are unbiased and representative of the overall work cycle. To determine the required sample size for reliable estimates, the formula n = Z² × p × (1 − p) / e² is applied, where n is the number of observations needed, Z is the Z-score for the desired confidence level (e.g., 1.96 for 95% confidence), p is the estimated proportion of time for the activity (often initially set at 0.5 for maximum variability if unknown), and e is the acceptable margin of error. This approach allows for probabilistic inference about time allocation without the need for exhaustive data collection.
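Under the stated assumptions (Z = 1.96 for 95% confidence, a ±5% margin, and p = 0.5 when no prior estimate exists), the sample-size formula can be evaluated directly; the function below is a minimal sketch:

```python
import math

def sample_size(p, z=1.96, margin=0.05):
    """n = z^2 * p * (1 - p) / margin^2, rounded up to a whole
    number of observations."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

print(sample_size(0.5))   # worst-case p = 0.5 gives 385 observations
print(sample_size(0.3))   # a prior estimate of 30% needs fewer
```

Because p(1 − p) peaks at p = 0.5, starting from 0.5 gives a conservative (largest) sample size that can be revised downward once early observations suggest a better estimate of p.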
In applications, work sampling excels for irregular tasks, such as maintenance operations or service-oriented roles where activities fluctuate unpredictably, and for group studies involving multiple workers or processes, enabling broad assessments of idle time, productive work, or delays across a facility. For instance, it has been widely adopted in manufacturing to evaluate machine utilization rates and in healthcare to measure staff activity distributions, providing insights that inform process improvements.
Analysis of work sampling data involves calculating the percentage of time devoted to each activity as (number of observations of the activity ÷ total number of observations) × 100, which yields an estimate of the activity's share of total available time, with a confidence interval derived from the sample size formula. These proportions can then be multiplied by the total working hours to estimate absolute time expenditures, facilitating comparisons and optimization.
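This analysis step can be sketched as follows, with invented observation counts:

```python
def activity_share_pct(activity_obs, total_obs):
    """Estimated percentage of time spent on the activity."""
    return activity_obs / total_obs * 100.0

def estimated_hours(activity_obs, total_obs, working_hours):
    """Scale the sampled proportion to absolute hours."""
    return activity_obs / total_obs * working_hours

# 120 of 400 random observations caught the machine idle (hypothetical).
share = activity_share_pct(120, 400)       # 30.0 percent
hours = estimated_hours(120, 400, 40.0)    # 12.0 hours of a 40-hour week
```

The resulting figures are estimates, not measurements; their precision is governed by the confidence level and margin of error chosen when the sample size was set.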
A key advantage of work sampling lies in its non-intrusive nature, as intermittent observations minimize interference with normal operations and reduce observer bias compared to more intensive methods like continuous time studies, making it cost-effective for large-scale or long-term evaluations. This technique's statistical foundation ensures objectivity, though it requires careful random sampling to avoid temporal biases, such as overlooking peak or off-peak periods.
Predetermined Motion Time Systems
Predetermined motion time systems (PMTS) are analytical techniques used in work measurement to establish time standards by decomposing manual tasks into fundamental human motions and assigning predefined time values to each motion from standardized tables. These systems trace their roots to the work of Frank and Lillian Gilbreth, who developed therbligs—18 elemental motion units such as search, grasp, transport loaded, and position—that represent the basic building blocks of human activity in performing tasks. Therbligs enable a detailed breakdown of work sequences without requiring direct observation of workers, focusing instead on the physiological and mechanical aspects of motion to optimize efficiency and reduce fatigue.[22]
The most prominent PMTS is Methods-Time Measurement (MTM), first published in 1948 by Harold B. Maynard, John L. Schwab, and G.J. Stegemerten, building on Gilbreth's therbligs to create a rigorous framework for time predetermination. MTM uses time measurement units (TMUs), where 1 TMU equals 0.00001 hours (or 0.036 seconds), to quantify motions with high precision; for instance, the MTM-1 system analyzes tasks at a micromotion level, assigning times to elements like reach, grasp, and release based on variables such as distance and object weight. To accommodate varying levels of detail, MTM includes hierarchical systems: MTM-1 for detailed, short-cycle operations requiring fine analysis, and MTM-2 for coarser, longer-cycle tasks using grouped motions to expedite the process while maintaining accuracy. Another widely adopted system is MODAPTS (Modular Arrangement of Predetermined Time Standards), developed in the late 1960s by Chris Heyde, which simplifies analysis by coding body-part actions (e.g., move, get, put) in multiples of 0.129 seconds at a comfortable pace, emphasizing ease of application over MTM's micromotion granularity.[23][24][25]
In practice, PMTS application involves observing or describing a task, segmenting it into therbligs or equivalent motion elements, selecting appropriate time values from system tables, and summing them to yield the total standard time, often incorporating allowances for rest and delays. This process ensures consistency and repeatability, as times are derived from extensive empirical data rather than subjective assessments. A key advantage of PMTS is the elimination of rating bias inherent in observational methods, making it particularly valuable for designing standards for new processes, hazardous environments, or tasks where direct timing is impractical. These motion-based times can also be synthesized into higher-level standards for broader applications.[24][26][25]
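As a toy illustration of this summing step, consider a short reach–grasp–move–release sequence; the element names and TMU values below are hypothetical rather than taken from real MTM tables, which index times by motion class, distance, and load:

```python
TMU_SECONDS = 0.036  # 1 TMU = 0.00001 h = 0.036 s (MTM convention)

# Hypothetical motion elements and TMU values for one cycle.
elements = [
    ("reach",   12.9),
    ("grasp",    3.5),
    ("move",    13.4),
    ("release",  2.0),
]

cycle_tmu = sum(tmu for _, tmu in elements)
cycle_seconds = cycle_tmu * TMU_SECONDS

allowance = 0.12  # illustrative 12% allowance for rest and delays
standard_seconds = cycle_seconds * (1 + allowance)
```

Because every analyst drawing from the same tables would assign the same values, two independent analyses of the same motion sequence arrive at the same time, which is the consistency advantage claimed for PMTS.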
Synthesis from Standard Data
Synthesis from standard data is a work measurement technique that establishes time standards for new or modified tasks by selecting and summing pre-measured elemental times from established databases, avoiding the need for direct observation of the entire operation. These databases, often called standard data systems, contain normal time values for common work elements derived from previous direct time studies or predetermined motion time systems.[8]
The method begins with breaking down the task into its constituent elements, such as machine setups, material handling, or tool adjustments, and then retrieving the corresponding times from the data bank. These elemental times are adjusted as necessary for specific conditions, including variations in worker performance, equipment, or environment, before being combined to form the total normal time. The standard time is then calculated by adding allowances for personal needs, fatigue, and unavoidable delays, using the formula Tₛ = Σtₑ + A, where Tₛ is the standard time, Σtₑ is the sum of selected elemental times, and A represents the allowance (typically 10-20% of normal time). For instance, in a power press operation, times for reaching, grasping, and positioning parts can be pulled from standard tables like MTM-2 and aggregated to estimate the full cycle.[8]
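The synthesis step can be sketched with a hypothetical data bank of elemental normal times; the element names, values, and the 15% allowance convention below are illustrative assumptions, not values from a real standard-data system:

```python
# Hypothetical standard-data bank of elemental normal times, in minutes,
# e.g. accumulated from earlier time studies or PMTS tables.
standard_data = {
    "load part": 0.08,
    "position part": 0.05,
    "press cycle": 0.30,
    "unload part": 0.06,
}

def synthesized_standard_time(element_names, data, allowance_pct=15.0):
    """T_s = sum(t_e) + A, with the allowance A taken here as a
    percentage of the summed elemental (normal) time."""
    total = sum(data[name] for name in element_names)
    return total + total * allowance_pct / 100.0

ts = synthesized_standard_time(
    ["load part", "position part", "press cycle", "unload part"],
    standard_data,
)  # about 0.56 minutes per cycle
```

A redesigned variant of the operation can be costed the same way, by selecting a different set of elements from the bank, without any new observation on the shop floor.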
This approach offers significant advantages over conducting fresh time studies, particularly for repetitive elements, as it is faster, more cost-effective, and provides consistent results across similar tasks within an organization. By leveraging historical data, it minimizes subjectivity and enables rapid standard setting without halting production for observations. Standard data may occasionally incorporate values from predetermined motion time systems as a source for elemental times.[8]
The technique finds unique application in process design and the evaluation of task variants, where existing operations are reconfigured or scaled, allowing engineers to predict times reliably using proven building blocks from past studies. It is especially valuable in manufacturing environments with extensive records of elemental data, supporting planning, costing, and incentive schemes efficiently.[8]
Estimating Methods
Estimating methods in work measurement rely on expert judgment to predict task durations when direct observation or detailed analysis is impractical, such as for unique or infrequent jobs.[27] These approaches draw upon the accumulated experience of supervisors, engineers, or skilled operators to approximate the time required for performing specific tasks, often serving as a preliminary tool for planning and budgeting.[28] Unlike more structured techniques, estimation emphasizes qualitative assessment based on familiarity with similar operations, making it suitable for scenarios where historical data or standards are limited.[29]
The process typically involves comparing the new task to known benchmarks from past similar activities, while adjusting for influencing factors such as task complexity, worker skill levels, environmental conditions, and material variations.[27] Experts mentally simulate the workflow, factoring in setup times, potential delays, and execution steps to arrive at a time estimate, sometimes through group consensus to reduce individual biases.[29] This method requires no specialized equipment, relying instead on professional intuition honed over years of exposure to comparable work. For instance, in analytical estimating, tasks may be broken into components for more refined judgments, though general estimation remains holistic.[27]
Estimating is particularly prevalent in fields like construction and research and development (R&D), where projects often involve non-repetitive elements, such as custom builds or prototype development, necessitating quick approximations for scheduling and resource allocation.[29] In NASA's Kennedy Space Center operations, for example, engineers use estimation for Shuttle processing tasks akin to construction activities, incorporating "as-run" feedback from prior missions to refine predictions.[29] Accuracy tends to improve with the estimator's experience, as repeated exposure to real-world variances allows for better calibration of judgments over time.[29]
Despite these advantages, estimating methods exhibit higher variability and subjectivity compared to observational techniques, leading to potential inaccuracies that can affect planning reliability.[27] They are best employed as interim measures until more precise studies can be conducted, avoiding use in incentive-based systems where objectivity is critical.[28]