Requirement Was Clear, Proposal Wasn’t: GAO Upholds Rejection Over QC Plan Omission

In QA Engineering, LLC, B-423716, B-423716.2 (Sept. 30, 2025), QA Engineering protested the U.S. Army Corps of Engineers’ decision to award a contract for the construction of a pre-engineered metal building (PEMB) to Koman Advantage. QA argued that the agency improperly found its proposal technically unacceptable for failing to address quality control for off-site fabrication, a requirement QA claimed was not clearly stated in the solicitation. QA also contended that its proposal did meet the requirement and that other offerors were treated more favorably despite submitting similar responses. GAO rejected each of these claims, finding that the solicitation clearly required a discussion of off-site fabrication quality control, that QA failed to provide one, and that the other offerors met the requirement. GAO likewise found no merit in QA’s remaining arguments concerning inconsistent evaluator scoring and consensus ratings.

The Decision

GAO denied the protest, ruling that:

  1. Failure to Address Off-Site Fabrication Was a Clear Deficiency: GAO concluded that the solicitation explicitly required offerors to describe their quality control approach for both on-site construction and off-site fabrication. QA’s proposal, however, focused solely on on-site efforts and did not include any meaningful discussion of fabrication, despite the pre-engineered metal building components being manufactured off-site. The only mention of off-site work appeared in a single sentence stating that a manager would coordinate inspections “on-site and off-site,” which GAO found insufficient to meet the solicitation’s requirement.
  2. No Unstated Evaluation Criteria Were Applied: QA argued that the agency introduced an unstated criterion by expecting discussion of PEMB quality control. GAO rejected this, explaining that while the term “PEMB” wasn’t repeated in the instruction, the requirement to address off-site fabrication applied directly to the pre-engineered metal components. The agency’s use of “PEMB” was simply shorthand for this clearly stated requirement, not an unstated criterion.
  3. Disparate Treatment Claim Failed for Lack of Similarity: QA also claimed that other offerors failed to address off-site fabrication yet were rated technically acceptable. GAO disagreed. The record showed that the other offerors did substantively discuss their off-site fabrication approach, including subcontractor roles and quality control procedures. GAO found that QA failed to demonstrate that it was treated differently from offerors with substantively identical proposals, a requirement for any successful disparate treatment protest.
  4. Consensus Ratings Can Differ from Individual Scores: QA objected that individual evaluator worksheets reflected initial scores that differed from the consensus findings. GAO reiterated that agencies may resolve such differences through discussion and are not bound by initial ratings, so long as the consensus view is reasonable and documented. Here, the consensus judgment was fully consistent with the source selection decision and supported by the record.

Key Takeaways for Contractors

  1. Always Directly Address Every Solicitation Requirement: A well-written proposal must speak clearly to each element of the solicitation. GAO found that QA’s failure to discuss off-site fabrication—even briefly—was a valid reason to deem its proposal unacceptable.
  2. Red Team Reviews Help Catch Small but Critical Gaps: This case shows the value of having a “red team” scrub proposals for completeness. A single oversight can derail an otherwise strong proposal.
  3. Disparate Treatment Claims Require Proof of Similarity: To prove unequal treatment, protesters must show that other offerors had the same flaws but were treated more favorably. Here, QA failed to meet that standard because other offerors did address the required topic.
  4. Consensus Evaluations Do Not Require Unanimity: Agencies are allowed to reconcile differences among evaluators and reach consensus. Disagreement alone between individual and final scores does not show that the agency erred.