Stakeholders & Concerns

Understanding who your stakeholders are and what they care about is the starting point for any architectural work. This article frames stakeholders, their typical concerns, and how to elicit and prioritize them so architectural decisions align with the right outcomes. Scope: we focus on architecture-level concerns (mostly non-functional qualities and cross-cutting constraints), not detailed feature design. For how this topic fits alongside siblings, see Architecture vs. Design vs. Implementation and Architectural Decision Impact & Cost of Change.

Core ideas

  • Stakeholder: anyone who has a vested interest in the system or is affected by it (customers, internal users, business/product, engineers, operations/SRE, platform/infra, security, compliance/legal, data, support, partners, regulators).
  • Concern: a matter of interest to a stakeholder that architecture should address, often expressed as desired quality attributes (e.g., availability, performance, security) or constraints (e.g., regulatory, cost caps, tech choices).
  • Viewpoint: a template for describing the system from a perspective that addresses a set of concerns. Views applying these viewpoints help communicate how the architecture satisfies concerns. See Views & Viewpoints.
  • Quality attributes provide the language to express and test concerns. See Quality Attributes.

Typical stakeholders and their concerns

The list is illustrative; your context may include more (e.g., open‑source community, auditors) or fewer.

| Stakeholder | Typical concerns | Example measures/signals |
| --- | --- | --- |
| Business/Product | Time‑to‑market, differentiation, roadmap feasibility, cost, risk | Cycle time, lead time, burn rate, OKRs |
| End Users/Customers | Usability, performance, reliability, accessibility, privacy | Core Web Vitals, app latency, uptime/SLA, a11y checks |
| Engineering/Teams | Modularity, testability, maintainability, devX, tooling | Change failure rate, MTTR, code health metrics |
| Operations/SRE | Availability, resilience, observability, capacity, run cost | SLO/SLI, error budgets, saturation, cloud spend |
| Security | Threat surfaces, authn/z, data protection, secrets, supply chain | Security posture, vuln MTTR, mTLS coverage, SBOM |
| Compliance/Legal | Data residency, PII handling, auditability, retention | Evidence artifacts, controls mapping, retention policy |
| Data/Analytics | Data quality, lineage, access patterns, schema evolution | Freshness, completeness, lineage trace, CDC stability |
| Platform/Infra | Standardization, operability, portability, quota/cost | Golden path adoption, image provenance, quotas |
| Partners/Integrators | Stable contracts, SLAs, versioning, deprecation policy | API error rates, version lifecycle, partner satisfaction |
| Support/CS | Diagnostics, feature flags, error clarity, rollback paths | Ticket volume, first‑response time, rollback MTTR |

Related topics for deeper dives: Observability & Operations, Security Architecture, and Architecture Governance & Organization.

Eliciting and prioritizing concerns

  1. Identify stakeholders — start from the value stream: who builds, runs, uses, sells, supports, audits, or integrates with the system?
  2. Elicit concerns — interviews/workshops, review of incidents/postmortems, contracts/SLAs, regulatory commitments; convert concerns into testable quality attribute scenarios (stimulus → environment → response → measure). See Quality Attributes.
  3. Prioritize — use impact vs. likelihood, business value, and risk exposure. Establish explicit trade‑offs (e.g., latency vs. consistency, speed vs. safety).
  4. Trace to views and decisions — choose appropriate viewpoints and produce views that address the concerns. See Views & Viewpoints. Capture decisions as ADRs with rationale and consequences. See Architecture Decision Records (ADR).
  5. Validate — align with governance/review practices. See Review Boards & Design Reviews. Define acceptance criteria and, where possible, executable checks (tests, budgets, policy-as-code).
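Steps 2 and 3 above can be sketched in code: model each concern as a quality attribute scenario (stimulus → environment → response → measure) and prioritize by a simple impact × likelihood risk score. This is a minimal illustration; the class name, scoring scale, and example scenarios are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class QualityScenario:
    """A testable quality attribute scenario (step 2)."""
    stakeholder: str
    stimulus: str      # what happens
    environment: str   # under which conditions
    response: str      # expected system behavior
    measure: str       # how success is verified
    impact: int = 1        # 1 (low) .. 5 (high) business impact
    likelihood: int = 1    # 1 (rare) .. 5 (frequent)

    @property
    def risk_exposure(self) -> int:
        # Simple impact x likelihood score used for prioritization (step 3)
        return self.impact * self.likelihood

scenarios = [
    QualityScenario("SRE", "traffic spikes to 3x baseline", "peak shopping hours",
                    "p99 latency stays under 400 ms", "load-test report",
                    impact=4, likelihood=3),
    QualityScenario("Compliance", "audit request for PII access logs", "annual audit",
                    "complete log export within 24 h", "evidence artifact",
                    impact=5, likelihood=2),
]

# Highest risk exposure first drives the prioritization discussion.
for s in sorted(scenarios, key=lambda s: s.risk_exposure, reverse=True):
    print(f"{s.risk_exposure:>2}  {s.stakeholder}: {s.stimulus}")
```

A scalar score is deliberately crude; its value is forcing an explicit, comparable statement of impact and likelihood that stakeholders can argue about.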

Decision flow

A decision flow for eliciting and prioritizing stakeholder concerns.

Examples (Scenarios)

  • Security must satisfy PCI obligations; Product wants a new promo engine live in 4 weeks; SRE holds a 99.9% availability SLO. A feasible compromise: introduce a queue-backed promo calculator to decouple latency from spikes, keep the payment flow isolated under stricter controls, and adopt feature flags for controlled rollout; capture the trade-offs in an ADR and define SLO-based alerts.
  • Partners require stable contracts and deprecation notices; Legal requires data residency. Provide region-pinned storage, versioned APIs with long deprecation windows, additive-only schema evolution, and a published compatibility policy.
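The controlled rollout mentioned in the first scenario is commonly implemented as a deterministic percentage flag: hash the user into a stable bucket so the same user always gets the same answer. A minimal sketch; the flag name, helper, and user ID are illustrative, not part of any real flag service.

```python
import hashlib

def flag_enabled(flag: str, user_id: str, rollout_percent: int) -> bool:
    """Deterministic percentage rollout: hash (flag, user) into a 0-99 bucket."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percent

# Gate the hypothetical promo engine behind a flag during rollout.
if flag_enabled("promo-engine-v2", "user-42", rollout_percent=10):
    pass  # new queue-backed path
else:
    pass  # existing path
```

Hashing flag and user together keeps rollouts independent across flags, and raising `rollout_percent` only ever adds users, so nobody flip-flops between code paths mid-rollout.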

Implementation notes and pitfalls

Implementation notes

  • Use a lightweight RACI for major decisions to clarify who approves vs. who is consulted.
  • Maintain a traceability matrix from concern → quality attribute scenario → view(s) → ADR(s) → tests/monitors.
  • Turn critical concerns into budgets and guardrails (latency/error budgets, cost/bandwidth budgets, policy-as-code for security/compliance).
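The traceability matrix and guardrail ideas above can be combined into an executable check: represent each concern's chain (scenario → views → ADRs → checks) as data, then flag concerns that have no test or monitor attached. All identifiers here (scenario IDs, ADR numbers, check names) are hypothetical placeholders.

```python
# Hypothetical traceability matrix: concern -> scenario -> views -> ADRs -> checks.
traceability = [
    {
        "concern": "availability",
        "scenario": "QS-01 p99 latency under 3x spike",
        "views": ["deployment"],
        "adrs": ["ADR-007 queue-backed promo calculator"],
        "checks": ["slo-latency-alert", "load-test-ci"],
    },
    {
        "concern": "data residency",
        "scenario": "QS-02 EU data stays in EU region",
        "views": ["information"],
        "adrs": ["ADR-011 region-pinned storage"],
        "checks": [],  # gap: no executable check yet
    },
]

def untested_concerns(matrix):
    """Flag concerns with no tests/monitors attached (traceability gaps)."""
    return [row["concern"] for row in matrix if not row["checks"]]

print(untested_concerns(traceability))  # ['data residency']
```

Running a gap check like this in CI turns "maintain a traceability matrix" from a documentation chore into a guardrail that fails loudly when a concern loses its coverage.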

Operational and observability considerations

  • Capture top concerns as SLIs/SLOs and wire dashboards early; align alerting to error budgets, not just static thresholds.
  • Ensure trace context and correlation IDs flow across all critical paths to tie concerns to actual runtime behavior. See Observability & Operations.
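Error-budget-based alerting starts from simple arithmetic: an availability SLO implies a fixed budget of unavailability per window, and alerts fire on how fast that budget is being spent rather than on static thresholds. A sketch of the calculation (function names are illustrative):

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Minutes of allowed unavailability for an availability SLO over a window."""
    return (1.0 - slo) * window_days * 24 * 60

def budget_remaining(slo: float, downtime_minutes: float, window_days: int = 30) -> float:
    """Fraction of the error budget still unspent (negative once the SLO is breached)."""
    budget = error_budget_minutes(slo, window_days)
    return (budget - downtime_minutes) / budget

# A 99.9% SLO over 30 days allows ~43.2 minutes of downtime.
print(round(error_budget_minutes(0.999), 1))    # 43.2
print(round(budget_remaining(0.999, 10.0), 2))  # 0.77
```

The 99.9% SLO held by SRE in the scenario above therefore buys Product roughly 43 minutes of monthly risk budget to spend on rollouts, which is exactly the trade-off an ADR should record.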

Common pitfalls

  • Solutionizing too early: jumping to technology choices before clarifying concerns and trade‑offs.
  • Ignoring “quiet” stakeholders: compliance, support, or downstream integrators not present in early meetings.
  • Treating security and operability as afterthoughts—these are primary concerns, not add‑ons.
  • Design by committee: lack of a clear decision owner stalls progress; use ADRs and RACI.

When to use

  • At project inception, when scoping an initiative or new architecture.
  • Before significant changes (e.g., major dependency, new region, multi‑tenant shift).
  • After incidents or major SLO breaches to re‑validate priorities.

When not to use

  • Tiny prototypes or throwaway spikes where architecture decisions are intentionally deferred.
  • When concerns are already well understood and validated for a very similar context—avoid re‑running heavy workshops; do a light refresh instead.

Design Review Checklist

  • Have all key stakeholders been identified?
  • Are the top 3-5 quality attribute scenarios defined and prioritized?
  • Have conflicting concerns been acknowledged and trade-offs documented?
  • Is there a clear mapping from concerns to architectural decisions?
  • Are there views that address the primary concerns of key stakeholders?
  • Have security, operational, and compliance concerns been treated as first-class requirements?

References

  1. ISO/IEC/IEEE 42010:2022, Systems and software engineering — Architecture description.
  2. Rozanski, N. & Woods, E., Software Systems Architecture: Working with Stakeholders Using Viewpoints and Perspectives.
  3. SEI, Architecture Tradeoff Analysis Method (ATAM) collection/overview.
  4. Kazman, R., Klein, M. & Clements, P., ATAM: Method for Architecture Evaluation, SEI Technical Report.