Core Processes
Identification
Identification in configuration management is the initial process of selecting, defining, and documenting the configuration items (CIs) that require control throughout the product lifecycle. This step establishes a clear product structure by identifying functional and physical attributes of hardware, software, firmware, and documentation, ensuring traceability and consistency from design to disposal. Configuration identification forms the basis for all other CM functions by specifying what elements are subject to management.[26][7]
The process begins with selecting CIs based on established criteria, including their criticality to system performance, volatility (frequency of changes), and interfaces with other components. Items meeting these criteria—such as key subsystems, interfaces, or documents—are designated as CIs to focus control efforts on elements that impact safety, quality, or functionality. Once selected, each CI is assigned a unique identifier, along with attributes like version numbers, revision levels, dependencies, and status information, often recorded in engineering drawings or bills of material (BOMs). ISO 10007 emphasizes that this documentation must capture all aspects defining the CI at a given point, enabling precise tracking.[27][7]
To support identification, organizations employ repositories or databases as centralized tools for cataloging CIs and maintaining their records. These repositories enable the management of variants—different forms of a CI—and assemblies by establishing hierarchical links between lower-level items and higher-level configurations, such as through product structure trees or BOMs. This approach ensures that assemblies, like integrated subsystems, are treated as cohesive units while accommodating variations due to manufacturing or customer specifications.[27][7]
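The hierarchical CI records and product structure trees described above can be sketched in a few lines of Python. The CI identifiers, names, and revision letters below are hypothetical, and a real CM repository would persist these records in a database rather than in memory; this is only a minimal model of the data relationships.

```python
from dataclasses import dataclass, field

@dataclass
class ConfigurationItem:
    """A configuration item (CI) record: unique identifier, revision, children."""
    ci_id: str
    name: str
    revision: str = "A"
    children: list["ConfigurationItem"] = field(default_factory=list)

def flatten(ci: ConfigurationItem, depth: int = 0) -> list[tuple[int, str]]:
    """Walk the product structure tree, yielding (indent level, CI id) rows,
    as a BOM-style listing would."""
    rows = [(depth, ci.ci_id)]
    for child in ci.children:
        rows.extend(flatten(child, depth + 1))
    return rows

# Hypothetical assembly: an avionics unit built from a board and firmware.
board = ConfigurationItem("CI-1001", "Processor board", "B")
firmware = ConfigurationItem("CI-1002", "Flight firmware", "C")
avionics = ConfigurationItem("CI-1000", "Avionics unit", "A", [board, firmware])

for level, ci_id in flatten(avionics):
    print("  " * level + ci_id)
```

The tree structure lets an assembly such as the avionics unit be treated as a cohesive unit while each constituent CI keeps its own identifier and revision.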
A key challenge in identification is calibrating the level of detail: over-identification introduces excessive complexity and administrative burden, while under-identification creates gaps in control and risks non-compliance. In complex systems such as aircraft assemblies, over-identifying minor components like chassis or tires can inflate documentation requirements and prolong integration, while under-identifying critical interfaces or material treatments may hinder airworthiness certification and delay delivery. Effective selection mitigates these risks by aligning CI granularity with lifecycle needs and organizational resources.[28]
The primary outputs of the identification process are detailed configuration item records, which serve as the authoritative reference for CI attributes, and initial baselines that snapshot the approved configuration at key milestones. These baselines provide a stable reference point for ongoing management.[7][27]
Control
In configuration management (CM), the control process ensures that modifications to configuration items (CIs) are deliberate, evaluated, and authorized to maintain system integrity and prevent unauthorized alterations.[29] This involves systematic procedures for proposing, assessing, and implementing changes while minimizing disruptions to functionality, performance, and reliability. The primary goal is to balance the need for evolution with the preservation of established baselines, drawing on identified CIs as the foundation for change proposals.[1]
The Change Control Board (CCB) plays a central role in overseeing the change control process by reviewing proposed modifications to hardware, firmware, software, and documentation.[30] Composed of qualified representatives from technical, logistical, and programmatic disciplines, the CCB evaluates changes based on key impact criteria, including cost (resource requirements and affordability), risk (technical, operational, and safety implications), and schedule (effects on timelines and deliverables).[31] Approval recommendations from the CCB are forwarded to the Configuration Approval Authority (CAA), often the program manager, ensuring decisions align with project objectives.[31]
The change control workflow typically begins with the submission of a formal request, such as an Engineering Change Proposal (ECP), which documents the proposed modification, rationale, and potential impacts.[31] This is followed by analysis, where the CCB assesses the change's effects on existing CIs, including compatibility and downstream consequences, often classifying it as major (Class I, requiring CAA approval) or minor (Class II, delegable to lower levels).[31] Decision-making occurs through CCB deliberation, culminating in approval, rejection, or deferral; upon approval, implementation proceeds with testing and verification before integration.[29] Emergency changes, which address urgent issues like security vulnerabilities or service disruptions, follow an expedited path with abbreviated review—such as a Request for Variance (RFV)—but still require post-implementation documentation and CCB ratification to mitigate risks.[31] This workflow supports status reporting by logging decisions for traceability.[29]
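The routing logic of this workflow can be sketched as a small Python function. The classification criterion (whether a change affects form, fit, or function) and the route descriptions below are illustrative assumptions for the sketch, not definitions drawn from any particular standard.

```python
from dataclasses import dataclass

@dataclass
class ChangeProposal:
    """A simplified Engineering Change Proposal (ECP) record."""
    ecp_id: str
    description: str
    affects_form_fit_function: bool  # assumed criterion for Class I vs Class II
    emergency: bool = False

def classify(ecp: ChangeProposal) -> str:
    """Class I (major) changes require CAA approval; Class II may be delegated."""
    return "Class I" if ecp.affects_form_fit_function else "Class II"

def route(ecp: ChangeProposal) -> str:
    """Return the approval path for a proposal, per the workflow above."""
    if ecp.emergency:
        # Expedited path; still ratified by the CCB after the fact.
        return "expedited review -> implement -> post-hoc CCB ratification"
    if classify(ecp) == "Class I":
        return "CCB review -> CAA approval -> implement and verify"
    return "delegated review -> implement and verify"

print(route(ChangeProposal("ECP-042", "Update interface spec", True)))
```

In practice these routing rules live in a change-management tool or CM plan rather than code, but the branch structure mirrors the major/minor/emergency paths described above.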
Versioning techniques track the evolution of CIs through controlled releases, with semantic versioning providing a structured approach using the MAJOR.MINOR.PATCH format: MAJOR increments for incompatible changes, MINOR for backward-compatible feature additions, and PATCH for bug fixes.[32] This method ensures clear communication of change significance, facilitating dependency management and rollback in software and system configurations.[32]
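The MAJOR.MINOR.PATCH increment rules can be expressed directly as a small helper function; this is a minimal sketch of the semantic-versioning convention just described.

```python
def bump(version: str, change: str) -> str:
    """Increment a MAJOR.MINOR.PATCH version string per semantic versioning."""
    major, minor, patch = (int(part) for part in version.split("."))
    if change == "major":   # incompatible change: reset minor and patch
        return f"{major + 1}.0.0"
    if change == "minor":   # backward-compatible feature: reset patch
        return f"{major}.{minor + 1}.0"
    if change == "patch":   # bug fix only
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown change type: {change}")

print(bump("2.4.1", "minor"))  # → 2.5.0
```

Because lower-order components reset on a higher-order increment, the version string alone communicates how significant a release is relative to its predecessor.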
Post-approval, the control process integrates changes into baselines by updating the approved configuration snapshot, which serves as the new reference for future modifications and ensures ongoing consistency across the system's lifecycle.[8] This update formalizes the change, incorporating verified implementations to reflect the evolved state without compromising prior stability.[31]
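The idea of a baseline as an immutable snapshot, replaced rather than edited when an approved change is integrated, can be sketched as follows; the baseline labels and CI revisions are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Baseline:
    """An immutable snapshot of approved CI revisions at a milestone."""
    label: str
    ci_revisions: tuple[tuple[str, str], ...]  # (CI id, revision) pairs

def integrate_change(old: Baseline, ci_id: str, new_rev: str, label: str) -> Baseline:
    """Produce a new baseline reflecting one approved, verified change.
    The prior baseline is left untouched as a historical reference."""
    updated = tuple(
        (cid, new_rev if cid == ci_id else rev) for cid, rev in old.ci_revisions
    )
    return Baseline(label, updated)

b1 = Baseline("B1", (("CI-1000", "A"), ("CI-1001", "B")))
b2 = integrate_change(b1, "CI-1001", "C", "B2")
print(b2)
```

Keeping the superseded baseline intact preserves the stable reference point that prior work was verified against, while the new snapshot becomes the reference for future modifications.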
Status Accounting
Status accounting, a core function of configuration management, involves the systematic recording and reporting of configuration information to provide visibility into the status of configuration items (CIs) throughout their lifecycle.[5] It ensures that accurate, timely data on baselines, changes, and product attributes are maintained and accessible, supporting decision-making and traceability without requiring full re-verification of the entire system.[33] According to EIA-649C principles, status accounting captures and organizes data from CI identification through disposal, enabling consistency between requirements, documentation, and actual implementation.[33][34]
Key reporting mechanisms include logs of approved changes, baseline comparisons to highlight deviations, and metrics such as change frequency rates or compliance percentages to track progress and identify trends.[5] These reports are generated periodically or on demand, often for stakeholders during lifecycle reviews, and may include discrepancy lists that detail unresolved issues or variances from established baselines.[35] Data elements typically tracked encompass CI statuses—such as approved, implemented, or obsolete—along with unique identifiers, historical change records, and documentation versions to facilitate stakeholder reporting and analysis.[5] For instance, in government projects, status accounting maintains both current and historical records of deviations, waivers, and audit findings to support ongoing evaluations.[36]
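A status report and a change-frequency metric of the kind described above can be derived from a change log with a few lines of Python; the log entries, CI identifiers, and status names here are hypothetical.

```python
from collections import Counter
from datetime import date

# Hypothetical change log: (CI id, status, date recorded).
change_log = [
    ("CI-1000", "approved",    date(2024, 3, 1)),
    ("CI-1001", "implemented", date(2024, 3, 9)),
    ("CI-1000", "implemented", date(2024, 4, 2)),
    ("CI-1002", "obsolete",    date(2024, 4, 20)),
]

def status_summary(log):
    """Count records per status — one input to a status-accounting report."""
    return Counter(status for _, status, _ in log)

def changes_per_ci(log):
    """Change frequency by CI, a simple volatility metric."""
    return Counter(ci for ci, _, _ in log)

print(status_summary(change_log))
print(changes_per_ci(change_log))
```

In practice such queries would run against the CM database, but the principle is the same: status visibility comes from aggregating the recorded change history rather than re-inspecting the system.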
Tools for status accounting often leverage databases for efficient querying and real-time data sharing, integrated with version control systems to automate updates and notifications.[5] Standardized formats, such as web-based dashboards or exportable reports, allow for easy access and correlation of configuration data, aligning with guidelines in ISO 10007 for maintaining lifecycle visibility.[2] The primary benefits include enabling trend analysis to predict potential issues, conducting historical audits efficiently, and reducing risks associated with configuration drift by providing a reliable audit trail.[35] This function ultimately enhances product support and maintenance by ensuring all parties have access to verified status information.[33]
Audit and Verification
Audit and verification in configuration management involve systematic processes to ensure that the actual configuration of items aligns with established baselines and requirements, thereby maintaining integrity and compliance throughout the lifecycle. These activities confirm that changes have been properly implemented and that documentation accurately reflects the current state, mitigating risks of errors or deviations that could impact performance or safety.
Configuration audits are categorized into three primary types: functional, physical, and compliance. Functional audits verify that the performance and functional attributes of a configuration item meet the specified requirements, often through testing and analysis of operational data. Physical audits inspect the tangible attributes of the item, such as materials, dimensions, and assembly, to ensure they conform to design documentation. Compliance audits assess adherence to applicable standards, regulations, and contractual obligations, confirming that the configuration supports broader organizational or legal requirements.[37][38][39]
Key verification methods include formal configuration audits, such as the Functional Configuration Audit (FCA) and Physical Configuration Audit (PCA) as defined in Department of Defense (DoD) standards. The FCA examines test results and performance data to validate that the configuration item satisfies its functional specifications, while the PCA reviews the as-built product against approved documentation to identify any variances. Discrepancy resolution processes follow these audits, involving identification of inconsistencies, root cause analysis, and implementation of corrective measures to align the configuration with baselines; unresolved discrepancies may trigger further reviews or redesigns.[37][38][40]
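The core check of a physical configuration audit, comparing an as-built record against its approved baseline attribute by attribute, can be sketched as a dictionary diff; the attribute names and part data below are illustrative.

```python
def audit_discrepancies(baseline: dict, as_built: dict) -> list[str]:
    """Return human-readable discrepancies between baseline and as-built records."""
    issues = []
    for attr, expected in baseline.items():
        actual = as_built.get(attr)
        if actual != expected:
            issues.append(f"{attr}: expected {expected!r}, found {actual!r}")
    for attr in as_built.keys() - baseline.keys():
        issues.append(f"{attr}: present in as-built but not in baseline")
    return issues

# Hypothetical part record: the as-built revision lags the approved baseline.
baseline = {"part_no": "PN-778", "revision": "C", "material": "Al 7075"}
as_built = {"part_no": "PN-778", "revision": "B", "material": "Al 7075"}
print(audit_discrepancies(baseline, as_built))
```

Each discrepancy found this way would feed the resolution process described above: root cause analysis, corrective action, and re-verification against the baseline.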
Audits are typically conducted periodically as outlined in the configuration management plan, with frequency depending on the scale, risks, and requirements of the operation. Triggers for ad-hoc audits include major changes, such as system upgrades or incident responses, to promptly verify post-change integrity. Outcomes often include corrective actions, such as updates to documentation or reconfiguration, with records integrated into status accounting for traceability.[41][42][36]
Metrics for evaluating audit effectiveness focus on the audit findings rate, which measures the proportion of identified discrepancies relative to total items reviewed, and resolution time, which tracks the duration from discrepancy detection to completion of corrective action. A high findings rate may indicate process weaknesses, while short resolution times indicate an efficient corrective-action process. These metrics, derived from audit reports, help quantify the maturity of verification practices.[43][44]
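These two metrics reduce to simple arithmetic over audit records; the counts and resolution durations in the example below are hypothetical.

```python
def findings_rate(discrepancies: int, items_reviewed: int) -> float:
    """Fraction of reviewed items with identified discrepancies."""
    return discrepancies / items_reviewed

def mean_resolution_days(durations_days: list[int]) -> float:
    """Average time from discrepancy detection to corrective-action closure."""
    return sum(durations_days) / len(durations_days)

# Hypothetical audit cycle: 3 findings in 60 items, resolved in 5, 12, and 7 days.
print(round(findings_rate(3, 60), 3))    # → 0.05
print(mean_resolution_days([5, 12, 7]))  # → 8.0
```

Tracking both numbers over successive audit cycles shows whether verification practice is maturing: the findings rate should trend down and resolution times should shorten.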