Business Capability Assessment

A business capability assessment evaluates the strength, maturity, and strategic relevance of each capability in your map. This page explains how to score capabilities, interpret results, and prepare for the next steps in enterprise architecture.

Once an organization has defined its business objectives and built a clear business capability map, the next step is to evaluate how well each capability performs. A capability assessment provides this insight. It highlights which capabilities are strong, which need improvement, and where the organization should focus its attention.

Teams can run assessments at a pure business level by scoring importance, maturity, and strategic relevance. This is useful for an initial review of the capability model. The assessment becomes more valuable when capabilities are later linked to applications, users, and technology, because these connections explain why a capability is underperforming and what must change to improve it.

Capability assessment is therefore the practical bridge between mapping capabilities and planning initiatives. It gives leaders a fact-based view of where to invest, where to modernize, and how to align business and IT on the next steps in transformation. 

📚 Related: 2025 Gartner® Magic Quadrant™ for Enterprise Architecture Tools

 

What is a business capability assessment?

A business capability assessment evaluates how well an organization can perform the functions defined in its capability map. It examines each capability’s importance, current maturity, and readiness to support the organization’s strategic goals. The outcome is a clear view of where the business performs well and where targeted improvements are needed.

Capability assessment is different from process analysis or application reviews. Instead of focusing on activities, teams, or systems, it focuses on what the business must be able to do. This makes it a stable and strategic method for identifying gaps, especially after completing foundational steps such as defining objectives and creating the capability map.

Organizations typically run capability assessments during major initiatives such as operating model redesign, ERP modernization, post-merger integration, or portfolio planning. The assessment creates a fact-based starting point for capability-based planning and supports the broader work carried out in business architecture.

Early assessments can be performed without linking capabilities to applications or users. This provides a business-first perspective. Deeper insights emerge later when capabilities are connected to systems, roles, and technology, but the assessment itself remains focused on the capability level. 

Poster

Unlock EA value faster with the SAP LeanIX meta model

Optimizing your enterprise architecture begins with a structured overview.


What to assess: the core capability-level dimensions

A capability assessment examines the performance and strategic relevance of each capability using a set of core dimensions. These dimensions help teams evaluate capabilities from several angles before linking them to applications or organizational structures.

The scales and definitions below are widely used, but organizations can adapt them based on internal standards or the tools they use.

1. Strategic importance

Strategic importance evaluates how essential a capability is for achieving the organization’s business objectives. It helps distinguish capabilities that simply keep the business running from those that drive competitiveness or future growth. Categorizing importance ensures that assessment results reflect strategic priorities rather than operational opinions.

Scale:

  • Commodity – A basic operational capability required to run the business but not a source of differentiation.
  • Differentiation – A capability that strengthens competitiveness or efficiency compared to peers.
  • Innovation – A capability that enables new value, growth, or business models and requires continuous evolution.

Questions to ask:

  • Does this capability directly support strategic goals?
  • Would weak performance impact customers, revenue, or compliance?
  • Does this capability contribute to long-term innovation or new business models?

2. Current maturity

Current maturity measures how effectively the capability performs today across people, processes, data, and decision-making. It provides a grounded view of operational performance and highlights where the organization might be struggling with consistency, quality, or predictability. Understanding maturity helps identify where foundational improvements are needed before introducing new initiatives.

Scale:

  • Level 1 – Ad hoc – Work is unstructured, reactive, and inconsistent across teams.
  • Level 2 – Repeatable – The capability is performed consistently but relies heavily on individual knowledge.
  • Level 3 – Defined – Processes, roles, and outcomes are documented and standardized.
  • Level 4 – Managed – Performance is measured, monitored, and continuously controlled.
  • Level 5 – Optimized – The capability operates at peak performance with ongoing improvement and strategic innovation.

Questions to ask:

  • Are processes stable and consistently applied across teams?
  • Do people understand how this capability is supposed to operate?
  • Are outcomes predictable, reliable, and aligned with expectations?

3. Target maturity

Target maturity reflects the future performance level the organization aims to reach. It helps determine how much improvement is required and whether the capability needs incremental refinement or a significant redesign. Target maturity also helps teams ensure ambition aligns with strategic priorities and available resources.

Scale:

  • Level 1 – Ad hoc – The organization accepts that this capability only needs basic functionality to support operations.
  • Level 2 – Repeatable – The capability needs reliable, repeatable execution but does not require formalized processes or high performance.
  • Level 3 – Defined – The capability should operate with clear processes, roles, and documentation to support consistency across teams.
  • Level 4 – Managed – The capability must achieve measurable performance and governance standards to meet strategic needs.
  • Level 5 – Optimized – The capability should become a high-performing function with continuous improvement and strategic impact.

Questions to ask:

  • What level of performance is needed to support upcoming strategic priorities?
  • Does this capability need to become a differentiator or simply remain stable?
  • Is the desired future state achievable within the next planning cycle?

4. Lifecycle status

Lifecycle status shows where the capability sits in its evolution, from early planning to phase-out. It helps teams determine whether the capability will grow in importance, remain steady, or decline over time. Lifecycle context prevents over-investing in areas that may soon be retired or under-investing in capabilities that are becoming critical.

Scale:

  • Plan – The capability is being considered or designed but not yet active.
  • Phase in – The capability is being introduced or expanded within the organization.
  • Active – The capability is fully operational and part of standard business operations.
  • Phase out – The capability is being reduced or replaced due to declining relevance.
  • End of life – The capability is no longer needed and is scheduled for retirement.

Questions to ask:

  • Is this capability new, growing, stable, or declining in relevance?
  • Are improvements necessary now or better timed for a future cycle?
  • Should this capability be redesigned, merged, or simplified?

5. AI potential

AI potential identifies where automation, augmentation, or redesign driven by AI could meaningfully improve a capability. It encourages organizations to think about opportunities without requiring detailed technical input. Highlighting AI potential early helps align capability planning with broader digital transformation ambitions.

Scale:

  • No potential – AI does not meaningfully improve this capability.
  • Efficiency improvement – AI can automate routine tasks or streamline basic activities.
  • User interaction enhancement – AI can improve employee or customer interaction quality.
  • Process transformation – AI can fundamentally redesign how the capability operates.

Questions to ask:

  • Could AI automate manual or repetitive activities?
  • Could employees benefit from predictive insights or decision support?
  • Could AI enable new ways of performing this capability?

6. Enterprise domain

Enterprise domain groups capabilities into broader business areas, making it easier to structure assessments and identify patterns. These domains often match how the organization thinks about its value streams or operating model. Reviewing capabilities by domain helps identify uneven performance or areas that require coordinated improvement.

Domains:

  • Corporate – Capabilities supporting internal governance, finance, HR, and shared functions.
  • Customer – Capabilities responsible for experience, engagement, and lifecycle management.
  • Products & services – Capabilities focused on development, delivery, and service management.
  • Supply – Capabilities enabling procurement, logistics, and supply chain operations.

Questions to ask:

  • Which domain does this capability belong to?
  • Does the domain have clear leadership or ownership?
  • Do capabilities within the same domain show similar maturity trends?

7. Quality seal (governance)

The quality seal indicates whether the capability definition is complete, reviewed, and approved. A poor-quality definition makes scoring difficult and introduces inconsistencies across teams. Ensuring governance improves the reliability of assessment results and helps maintain a stable capability model over time.

States:

  • Broken – The capability definition is incomplete or contains errors preventing use.
  • Rejected – The capability was reviewed and not approved due to inconsistencies or overlap.
  • Draft – The capability is defined but not yet validated by governance owners.
  • Approved – The capability is validated, governed, and ready for assessment and planning.

Questions to ask:

  • Is the capability clearly defined and unambiguous?
  • Has it been validated by the right stakeholders?
  • Can this capability be confidently used in planning and reporting?
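Taken together, these dimensions can be captured in a simple, tool-agnostic record per capability. The sketch below is an illustrative Python data model (not the SAP LeanIX meta model; all names and values are hypothetical) showing how the scores for one capability could be stored so they stay comparable across workshops.

```python
from dataclasses import dataclass
from enum import Enum

class StrategicImportance(Enum):
    COMMODITY = "Commodity"
    DIFFERENTIATION = "Differentiation"
    INNOVATION = "Innovation"

class Lifecycle(Enum):
    PLAN = "Plan"
    PHASE_IN = "Phase in"
    ACTIVE = "Active"
    PHASE_OUT = "Phase out"
    END_OF_LIFE = "End of life"

class AIPotential(Enum):
    NO_POTENTIAL = "No potential"
    EFFICIENCY = "Efficiency improvement"
    INTERACTION = "User interaction enhancement"
    TRANSFORMATION = "Process transformation"

@dataclass
class CapabilityAssessment:
    name: str
    domain: str                       # Corporate, Customer, Products & services, Supply
    importance: StrategicImportance
    current_maturity: int             # 1 (ad hoc) to 5 (optimized)
    target_maturity: int              # 1 to 5
    lifecycle: Lifecycle
    ai_potential: AIPotential
    quality_seal: str                 # Broken, Rejected, Draft, or Approved
    notes: str = ""                   # brief justification for the scores

# Hypothetical record, matching the customer onboarding example later on this page.
onboarding = CapabilityAssessment(
    name="Customer onboarding",
    domain="Customer",
    importance=StrategicImportance.DIFFERENTIATION,
    current_maturity=2,
    target_maturity=4,
    lifecycle=Lifecycle.ACTIVE,
    ai_potential=AIPotential.INTERACTION,
    quality_seal="Approved",
    notes="Practices differ across regions; harmonization needed.",
)
```

The same fields map directly onto spreadsheet columns or fact sheet attributes if the assessment is run in a tool instead.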

 

How many dimensions should organizations assess?

Organizations do not need to assess every dimension. Teams typically start with strategic importance and current maturity to create a baseline.

Additional dimensions like target maturity, AI potential, or lifecycle can be added when the capability model becomes more mature or when EA tools such as SAP LeanIX make it easier to manage multiple dimensions.

The key is to choose a set of dimensions that supports strategic decision-making without overwhelming stakeholders during early assessments.

How to run a capability assessment

A capability assessment is most useful when it is simple, structured, and easy for business stakeholders to participate in. The goal is not to create a perfect scorecard, but to gain a shared understanding of how well each capability performs and where attention is needed.

Organizations typically follow a series of practical steps.

1. Prepare strategic inputs

Start with the basics: a clear view of business objectives and a finalized capability map. These two elements help participants anchor their thinking in strategy rather than team structures or personal experience.

Most teams find it helpful to distribute a short pre-read outlining the capability definitions and the dimensions being assessed.

2. Choose how to conduct the assessment

Different organizations run assessments in different ways. The choice depends on how formal the process needs to be and how often assessments will be repeated.

Common approaches include:

  • Spreadsheets when starting small or running a one-off assessment
  • Workshop-style scoring using slides or virtual whiteboards
  • Enterprise architecture tools when teams want consistent scoring, traceability, and automated visualizations

There’s no single right option — what matters is that the format feels accessible for the participants involved.

3. Set scoring criteria

Before scoring begins, teams should align on how to interpret each level and scale. This avoids confusion and ensures that a “Level 3” or an “Innovation” score means the same thing across domains.

Some teams document these definitions formally; others review them briefly at the start of the workshop. The key is a shared understanding, not heavy documentation.

Typical alignment includes:

  • what each maturity level means in practice
  • how strategic importance should be interpreted
  • what lifecycle stages imply
  • when a capability should be considered high or low AI potential
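Some teams go one step further and capture these shared definitions in a lightweight, machine-readable form so every workshop scores against the same wording. The snippet below is an illustrative Python sketch; the descriptions simply restate the scales defined earlier on this page.

```python
# Shared scoring definitions agreed before the workshop, so that a "Level 3"
# or an "Innovation" score means the same thing in every domain.
SCORING_CRITERIA = {
    "maturity": {
        1: "Ad hoc - unstructured, reactive, inconsistent across teams",
        2: "Repeatable - consistent but reliant on individual knowledge",
        3: "Defined - documented, standardized processes and roles",
        4: "Managed - measured, monitored, continuously controlled",
        5: "Optimized - peak performance with ongoing improvement",
    },
    "strategic_importance": {
        "Commodity": "Required to run the business, not a differentiator",
        "Differentiation": "Strengthens competitiveness or efficiency versus peers",
        "Innovation": "Enables new value, growth, or business models",
    },
    "ai_potential": {
        "No potential": "AI does not meaningfully improve this capability",
        "Efficiency improvement": "AI automates routine tasks",
        "User interaction enhancement": "AI improves interaction quality",
        "Process transformation": "AI fundamentally redesigns how the capability operates",
    },
}
```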

4. Engage the right stakeholders

A well-rounded assessment requires perspectives from people who understand how the capability actually operates. These often include capability owners, domain leaders, or strategy representatives.

Enterprise architects usually act as facilitators — guiding the conversation, clarifying definitions, and ensuring the discussion stays focused on the capability, not the organizational chart or specific systems.

A helpful mix of participants includes:

  • a business owner
  • someone close to operations
  • a representative from strategy or transformation
  • an EA facilitator

5. Facilitate the assessment

Teams review capabilities one by one, starting with the definition. For each capability, participants share observations, examples, and recent experiences before assigning scores.

This step works best when the facilitator keeps the conversation structured but flexible. Short notes explaining why a score was chosen help future reassessments and reduce ambiguity.

A simple flow for each capability:

  • Read the capability definition
  • Discuss performance or challenges
  • Assign scores for the selected dimensions
  • Capture brief supporting notes

6. Validate and finalize the results

After all capabilities are scored, teams review the results as a whole. Scores sometimes need adjustment to keep them consistent across domains or to reflect updated strategic priorities.

A short consolidation discussion ensures that the final assessment reflects a shared understanding, not isolated opinions. These results then become the foundation for identifying capability gaps, prioritizing improvements, and preparing for roadmap discussions.

 

Visualizing assessment results

Once capabilities are assessed, visualizing the results helps teams interpret patterns quickly and communicate findings clearly. Visuals turn a list of scores into insights that leadership, domain owners, and transformation teams can act on.

Even without linking capabilities to applications or organizational data, several simple views already provide strong decision support.

1. Heat maps

Heat maps are the most common way to present capability assessments. They highlight maturity, importance, or capability gaps using color coding, making it easy to spot strengths and weaknesses across domains.

They work well in both early assessments and recurring planning cycles because they condense a large amount of information into a single view.

Heat maps are especially helpful for:

  • comparing maturity across domains
  • identifying areas that need immediate attention
  • showing alignment (or misalignment) with strategic importance
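As a lightweight illustration, a maturity heat map can be generated directly from the assessment scores. The sketch below uses Python with matplotlib; the capabilities and scores are hypothetical placeholders based on the examples later on this page.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical scores: rows are capabilities, columns are maturity dimensions (1-5).
capabilities = ["Customer onboarding", "Supply planning", "Product portfolio mgmt."]
dimensions = ["Current maturity", "Target maturity"]
scores = np.array([
    [2, 4],
    [3, 4],
    [2, 5],
])

fig, ax = plt.subplots()
im = ax.imshow(scores, cmap="RdYlGn", vmin=1, vmax=5)
ax.set_xticks(range(len(dimensions)))
ax.set_xticklabels(dimensions)
ax.set_yticks(range(len(capabilities)))
ax.set_yticklabels(capabilities)
for i in range(len(capabilities)):
    for j in range(len(dimensions)):
        ax.text(j, i, str(scores[i, j]), ha="center", va="center")  # annotate each cell
fig.colorbar(im, ax=ax, label="Maturity level (1-5)")
ax.set_title("Capability maturity heat map")
plt.tight_layout()
plt.show()
```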

2. Radar charts

Radar or “spider” charts help teams compare the maturity of capabilities within a specific domain. They are useful when reviewing groups of related capabilities, such as customer-facing capabilities or supply chain functions.

These charts reveal imbalances — for example, when one capability is significantly lagging behind others in the same domain.
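A chart along these lines takes only a few lines of plotting code. The sketch below is illustrative Python with matplotlib; the capability names and maturity scores for a hypothetical customer domain are placeholders.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical current maturity (1-5) for capabilities in one domain.
labels = ["Onboarding", "Engagement", "Support", "Billing", "Retention"]
scores = [2, 3, 4, 3, 2]

# Repeat the first point so the polygon closes.
angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False).tolist()
angles += angles[:1]
values = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, values, linewidth=2)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels)
ax.set_ylim(0, 5)
ax.set_title("Customer domain: current maturity")
plt.show()
```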

3. Domain summaries

Domain-level summaries present capability scores in a simple table or grouped layout. They are particularly helpful for business stakeholders who want a clear view of their area without navigating the entire capability map.

Common elements in a domain summary include:

  • capability name
  • strategic importance
  • current and target maturity
  • lifecycle status
  • a brief comment or justification
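A summary like this can be assembled directly from the raw scores, as in the illustrative Python/pandas sketch below (capability names, scores, and comments are hypothetical).

```python
import pandas as pd

# Hypothetical domain summary for the Customer domain.
summary = pd.DataFrame([
    {"Capability": "Customer onboarding", "Importance": "Differentiation",
     "Current": 2, "Target": 4, "Lifecycle": "Active",
     "Comment": "Practices differ across regions"},
    {"Capability": "Complaint handling", "Importance": "Commodity",
     "Current": 3, "Target": 3, "Lifecycle": "Active",
     "Comment": "Stable; no action needed"},
])
summary["Gap"] = summary["Target"] - summary["Current"]  # how far current lags target
print(summary.to_string(index=False))
```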

4. Lightweight visuals for early-stage teams

Organizations running their first assessment often use simple formats such as:

  • color-coded slides
  • grouped capability cards
  • quadrant diagrams (importance vs. maturity)

These lightweight visuals work well in workshops and leadership reviews before more formal tools are introduced.

Poster

Elevating Business Capabilities With Pace-Layering

A 3-step guide plus best practices on extending Gartner's pace-layering strategy to business capability models.


Turning assessment results into action

The purpose of assessing capabilities is to understand where the organization should focus its attention. Once results are validated and visualized, they become a foundation for planning and prioritization.

Even without linking capabilities to applications or organizational data, the assessment highlights where performance gaps exist and where investment or redesign may be needed.

1. Identify capability gaps

Comparing current maturity to strategic importance and target maturity reveals where the organization is underperforming. Capabilities that are highly important but score low in maturity are natural priorities.

These gaps often signal issues such as inconsistent execution, unclear ownership, or outdated ways of working — all of which can be addressed before deeper architectural analysis begins.
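At its core, gap identification is simple arithmetic: subtract current maturity from target maturity and weigh the result against strategic importance. The sketch below is illustrative Python using the hypothetical scores from the examples later on this page.

```python
# Hypothetical results: (capability, strategic importance, current maturity, target maturity).
results = [
    ("Customer onboarding", "Differentiation", 2, 4),
    ("Supply planning", "Commodity", 3, 4),
    ("Product portfolio management", "Innovation", 2, 5),
]

# Rank by maturity gap so the largest shortfalls surface first.
for name, importance, current, target in sorted(results, key=lambda r: r[3] - r[2], reverse=True):
    print(f"{name}: gap = {target - current}, importance = {importance}")
```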

2. Prioritize improvement areas

Not every gap requires immediate action. Teams typically focus on capabilities that meet at least one of the following criteria:

  • high strategic importance
  • large gap between current and target maturity
  • lifecycle status that indicates growth or redesign
  • clear alignment with upcoming business objectives

This prioritization ensures that effort and investment flow toward areas with the strongest strategic impact.
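These criteria translate into a simple screening rule. The function below is an illustrative Python sketch; the threshold of a two-level maturity gap is an assumption for the example, not a standard.

```python
def is_priority(importance, current, target, lifecycle, supports_objectives):
    """True if the capability meets at least one of the prioritization criteria above."""
    return (
        importance in ("Differentiation", "Innovation")  # high strategic importance
        or (target - current) >= 2                       # large current-to-target gap (assumed threshold)
        or lifecycle in ("Plan", "Phase in")             # lifecycle signals growth or redesign
        or supports_objectives                           # aligned with upcoming business objectives
    )

# Example: supply planning qualifies because it is phasing in, despite a small maturity gap.
print(is_priority("Commodity", 3, 4, "Phase in", False))  # True
```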

3. Connect results to initiative planning

Assessment results help transformation and strategy teams shape early initiatives. A capability with low maturity and high strategic importance may trigger a redesign effort, a process harmonization project, or a discussion about future operating models.

At this stage, organizations do not need detailed application or technology information — they simply need clarity on which capabilities require the most attention.

4. Support roadmap discussions

Capabilities with large gaps often become candidates for future roadmap items. The assessment makes it easier to explain why a certain initiative should be included, how it aligns with objectives, and what the expected value might be.

This provides a structured way for business and IT leaders to discuss priorities, timing, and resource allocation.

5. Prepare for deeper enterprise architecture analysis

While this page focuses on business-level assessments, the results naturally feed into more detailed work such as capability–application alignment, complexity reviews, and roadmap creation.

Once capabilities are linked to applications, users, and IT components, the assessment becomes even more actionable — but the business-level evaluation remains the starting point for all further analysis.

 

Examples of capability assessments

Examples help teams understand how the assessment works in practice. The scenarios below illustrate how capabilities can be evaluated using the dimensions defined earlier, even when detailed application or technology data is not yet available.

Each example highlights typical patterns teams encounter during early assessments.

Example 1: Customer onboarding

Customer onboarding is a common capability that directly influences customer experience and revenue realization. During assessment, teams may find that it is strategically important but not yet standardized across regions.

Assessment snapshot:

  • Strategic importance: Differentiation
  • Current maturity: Level 2 (repeatable) — practices differ across teams, leading to inconsistent experiences
  • Target maturity: Level 4 (managed) — leadership wants predictable, measurable outcomes
  • Lifecycle: Active
  • AI potential: User interaction enhancement — opportunities exist for guided onboarding journeys
  • Domain: Customer
  • Quality seal: Approved

This example often signals the need for harmonized processes or clearer ownership before diving into deeper analysis.

Example 2: Supply planning

Supply planning supports core operations and resilience. Even without application data, teams can evaluate whether the organization is equipped to plan, adjust, and forecast effectively.

Assessment snapshot:

  • Strategic importance: Commodity (stable operations) → moving toward Differentiation for resilience
  • Current maturity: Level 3 (defined) — processes documented but performance varies under stress
  • Target maturity: Level 4 (managed) — stronger monitoring and cross-functional alignment needed
  • Lifecycle: Phase in — capability becoming more critical due to market volatility
  • AI potential: Efficiency improvement
  • Domain: Supply
  • Quality seal: Draft

This example helps teams understand how lifecycle changes can alter strategic relevance over time.

Example 3: Product portfolio management

Product portfolio management helps organizations evaluate which products to invest in, evolve, or retire. It is a capability that often requires strategic clarity and structured decision-making.

Assessment snapshot:

  • Strategic importance: Innovation — capability drives future offerings
  • Current maturity: Level 2 (repeatable) — decisions often depend on individual judgment
  • Target maturity: Level 5 (optimized) — leadership requires data-driven decision processes
  • Lifecycle: Active
  • AI potential: Process transformation — potential for automated insights and modeling
  • Domain: Products & services
  • Quality seal: Approved

This example shows how a capability can have basic execution today but a high strategic ambition for the future.

 

What’s next: move toward discovery or organizational alignment

A capability assessment gives the organization a clear business-level view of performance and strategic relevance.

From here, teams typically move in one of two directions depending on their maturity. Some continue by mapping capabilities to the people and roles who perform them, creating clarity around ownership and responsibilities. Others shift directly into technology discovery, where they identify the applications, services, dependencies, and technical components supporting each capability.

Both paths are valid. In practice, capability assessment, user-to-capability mapping, and technology discovery form the starting trio for establishing an enterprise architecture practice or a structured application portfolio approach.

Together, they help organizations move from a business-oriented view of capabilities to a deeper understanding of how processes, people, and technology come together to deliver them.

This creates the foundation for later activities such as capability-based planning, investment prioritization, and modernization roadmaps.

Free poster

Best Practices to Define Business Capability Maps

See how IT and business align with a complete overview of your business capability landscape.

  • Whether you work in banking, insurance, automotive, logistics, or another industry, this generic business capability map is the perfect starting point!
  • See mapping examples and model your own business capabilities!
  • Additionally, we have added tips and best practices on how to get started with business capability maps and how to create a complete overview of your business capability landscape.

FAQs

What is a business capability assessment?

A business capability assessment evaluates how well each capability in the capability map performs today, how important it is for the organization, and where improvements are needed. It provides a strategic view of strengths, gaps, and priorities before deeper architectural analysis begins.

How are capabilities assessed?

Capabilities are assessed by scoring dimensions such as strategic importance, current maturity, target maturity, lifecycle status, AI potential, and domain. This can be done through workshops, spreadsheet scoring, or EA tools, depending on the organization’s maturity.

What is the difference between capability maturity and a capability assessment?

Capability maturity is one dimension within the assessment. It reflects how consistently and effectively a capability operates today. A capability assessment considers maturity alongside other factors such as importance, lifecycle, and future ambition.

How often should capabilities be reassessed?

Most organizations reassess capabilities annually or during major transformation initiatives. Teams new to capability modeling may run an initial assessment to establish a baseline and refine it as governance matures.

Do capabilities need to be linked to applications before they can be assessed?

No. Early assessments can be done fully at the business level. Linking capabilities to users, applications, data architecture, and technical architecture objects becomes valuable later, when teams move into technology discovery or organizational alignment.