
The 3 Starting Points That Shape a Skills Strategy

  • Writer: Brian Fieser
  • 2 days ago
  • 12 min read

Cracking the Skills Code, Part 3


In Part 1, I argued that a skills strategy is not fundamentally a technology project. It is an organizational shift.


In Part 2, I argued that most skills strategies do not fail from lack of ambition. They fail from a lack of internal consistency, or coherence, across HR functions.


Part 3 gets more practical.


Because once an organization decides to begin a skills strategy, three foundational questions show up almost immediately:


  1. Do we have a workable role architecture?

  2. How are we going to map skills to roles?

  3. How are we going to identify skills at the employee and candidate level?


In my experience, these are the real starting points.  Not because they solve everything. But because they create the first usable bridge between work, skills, and people.  If those three elements are weak or disconnected, the rest of the strategy stays theoretical.  If they are good enough to begin, the strategy can start to move.


1. Start with a workable role architecture


A skills strategy needs some way to organize work.  That is what role architecture does.  At its simplest, role architecture is the structure an organization uses to define work through job families, roles, profiles, levels, and other related data.


That matters because skills do not create much enterprise value in isolation.  They create value when they are connected to work and people.  That connection is what enables hiring, career navigation, internal mobility, development planning, succession visibility, and workforce planning.


Workable matters more than perfect

Most organizations are not starting with perfect role architecture.  They are often starting with job architectures built for compensation, reporting, or administration. They may have role titles that are not especially meaningful to employees, limited visibility into the architecture, and career paths that still live outside the platform, if they exist at all.


That should not stop the work.


A company does not need to rebuild its entire job architecture before beginning a skills strategy.


What it does need is a workable role architecture:

  • enough structure to group work meaningfully

  • enough differentiation to support skill mapping

  • enough clarity that employees and recruiters can understand specific roles


That is the practical standard.  Not perfection.  But usability.
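To make "workable" concrete, here is an illustrative sketch of that minimal structure: job families grouping roles, with levels and a place for skills to attach later. The names and classes are hypothetical, not SAP SuccessFactors objects, and the point is only how little structure is needed to begin.

```python
# A minimal sketch of a "workable" role architecture.
# Hypothetical names and classes, not an SAP SuccessFactors data model.
from dataclasses import dataclass, field

@dataclass
class Role:
    title: str            # clear enough for employees and recruiters to understand
    level: str            # e.g. "Associate", "Senior", "Manager"
    skills: list[str] = field(default_factory=list)  # filled in during skill mapping

@dataclass
class JobFamily:
    name: str
    roles: list[Role] = field(default_factory=list)

# Enough structure to group work, enough differentiation to support mapping:
analytics = JobFamily("Data & Analytics", roles=[
    Role("Data Analyst", "Associate"),
    Role("Data Analyst", "Senior"),
    Role("Analytics Manager", "Manager"),
])
```

Nothing here is perfect or final; it is simply enough structure to group work meaningfully and hang skill mappings on later.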


Why this matters so much downstream

When role architecture is too broad, poorly named, or disconnected from how employees understand work, the downstream experience often shows cracks.


Career recommendations feel generic or completely wrong.

Internal mobility matches feel off.

Recruiters can question the candidate matching recommendations.


And once that happens, trust in the entire strategy starts to erode.  Very often, what looks like an AI, matching, or recommendation problem is actually a role architecture problem.


That is why I think leaders need to hear this clearly:


Do not wait for perfect role architecture before starting your skills journey. But do not underestimate how quickly weak role architecture will show up in certain parts of the employee experience.


Organizations that ultimately feel the baseline content should be enhanced can engage SAP partners in the Open Skills Ecosystem.  We have had the pleasure of working with many of these world-class organizations, from AI start-ups to legacy management consultancies.  One thing is true for all of them: they know speed is of the essence, and engagements that used to take many months can now be finished much faster with IP accelerators, consolidated administrator interfaces, and enriched global data sets that are far more readily available today.

 

2. Then map skills to roles


Once there is a workable role architecture, the next required step is skills-to-role mapping. Sometimes this is already included in the role architecture metadata, but unless it was updated recently, the skill data is almost certainly outdated.  This is the first real execution challenge in most skills strategies, because now the organization has to decide:


  1. What skill capabilities actually define success in this role?

  2. How many should we have?

  3. Which of those are core versus adjacent?

  4. What proficiency level matters?

  5. How much of that can be standardized before internal SMEs need to step in?
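The decisions behind those five questions can be captured in a single mapping record per skill. The sketch below is illustrative only; the field names and the four-point scale are assumptions, not a product schema.

```python
# Hypothetical sketch of a skill-to-role mapping record: which skills,
# core vs. adjacent, expected proficiency, and where the mapping came from.
from dataclasses import dataclass

PROFICIENCY = ["familiar", "working", "advanced", "expert"]  # assumed 4-point scale

@dataclass
class SkillMapping:
    skill: str
    core: bool              # core vs. adjacent to the role
    expected_level: str     # one of PROFICIENCY
    source: str             # e.g. "market-data", "research", "internal-SME"

data_analyst = [
    SkillMapping("SQL", core=True, expected_level="advanced", source="market-data"),
    SkillMapping("Data Visualization", core=True, expected_level="working", source="market-data"),
    SkillMapping("Stakeholder Management", core=False, expected_level="working", source="internal-SME"),
]

core_skills = [m.skill for m in data_analyst if m.core]
```

Recording the source of each mapping also makes the later SME review step easier, because the externally drafted rows are visible at a glance.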


And the answers to these questions are the first place we see incongruence among HR teams that previously did not need to align on these issues.  This is often the start of the lack of coherence I described in Part 2.


Why this step matters

Skills-to-role mapping is what translates architecture into usefulness.


It gives recruiters a clearer definition of fit.

It gives employees a clearer view of what matters in a target career role.

It gives managers a stronger basis for development conversations.

And it gives the platform something meaningful to match against.


Without it, a company may have a role model and a skills library, but not yet a real skills strategy.


This is where the Open Skills Ecosystem becomes practical

This is also where the Open Skills Ecosystem (OSE) around SAP SuccessFactors can materially change the starting point.


Organizations no longer have to assume that every role-skill relationship must be built manually from scratch.  This is where Blue Crab’s collaboration with firms such as TechWolf, Korn Ferry, and other OSE partners becomes useful. They represent slightly different approaches to the same early-stage challenge.


  • TechWolf’s approach is especially useful when the immediate need is a fast, market-informed first draft of skill-to-role mapping. It uses global labor market data, AI, and a broad skill ontology to infer which skills are most relevant to roles. That gives organizations a more practical starting point than a blank sheet of paper.


  • Korn Ferry’s approach starts from a different place. Through Success Profiles, it brings research-based role expectations into the process. That can be especially valuable for organizations that want to begin with a more structured definition of what success looks like in role, including not only skills, but also broader performance factors.  Korn Ferry’s approach incorporates both role architecture and skills alignment. 


Both can help accelerate one of the hardest early steps in a skills strategy: creating a credible first draft of skills mapped to roles.


And that is often the real goal at the start. Not perfection. Not finality. But a strong enough first draft that the organization can begin validating, refining, and activating.

 

Internal SMEs well versed in specific job families are next

It is inevitable that almost every organization will need to adjust the foundational components provided by its skill partner, and this is where line-of-business SMEs take the lead.  This is often a highly collaborative review and editing exercise between HR and the business's job family experts to review, modify, and approve the final skill list.  Oh – and make sure your SMEs represent the different business lines that use the same role.  We have seen different business lines generate different skill lists for the same role – and the chaos that ensued.


Skill mapping is not just a yes-or-no exercise

This is also where I think many organizations underestimate the work. A lot of teams treat skill-to-role mapping as a binary exercise:


Is this skill relevant to the role?

Yes or no.


That is useful, but it is only half the work. The more important question is:


How much mastery of this skill is actually expected in the role?

Most downstream processes and decisions are ultimately based on the expected level of proficiency. It affects:


  • recruiting fit

  • career navigation recommendations

  • mobility eligibility

  • development planning

  • learning recommendations

  • succession discussions on whether someone is ready now, close to ready, or still developing

  • manager conversations about readiness


Without a defined expectation of proficiency or mastery, the model remains too flat.


A role may require data analysis, coaching, negotiation, stakeholder management, or even SAP configuration — but the real business question is not simply whether those skills matter. It is whether the role requires basic familiarity, working capability, advanced application, or expert-level mastery.


That is why I often tell clients that if they complete only the binary skill-to-role exercise, they have really completed only half of the work. In fact, I would argue that organizations that fail to identify expected proficiency, or mastery level, can only achieve limited business value from their skills strategy outcomes.
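The difference between the binary exercise and the full exercise can be shown in a few lines. In this illustrative sketch (the scale, role, and skill names are assumptions), a binary check says the employee "has" every required skill, while a leveled check surfaces the gap that actually matters:

```python
# Sketch: binary skill presence vs. expected proficiency.
# A binary check asks "does the person have the skill?"; a leveled check
# asks "do they have enough of it?". Scale and data are illustrative.
LEVELS = {"familiar": 1, "working": 2, "advanced": 3, "expert": 4}

role_expectations = {"SQL": "advanced", "Coaching": "working"}
employee_skills   = {"SQL": "working",  "Coaching": "working"}

# Binary view: every required skill is present, so the employee "fits".
binary_fit = all(skill in employee_skills for skill in role_expectations)

# Leveled view: compare expected vs. actual proficiency per skill.
leveled_gaps = {
    skill: LEVELS[required] - LEVELS.get(employee_skills.get(skill, ""), 0)
    for skill, required in role_expectations.items()
}
gaps = {skill: gap for skill, gap in leveled_gaps.items() if gap > 0}
```

Here the binary view reports a fit, while the leveled view reveals a development gap in SQL – which is exactly the information recruiting fit, learning recommendations, and readiness conversations depend on.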


This can still be practical

That does not mean every organization needs to define proficiency expectations for every role in the company before launch.  That would be one of the fastest ways to stall the work.


This is where the right OSE partner can help again.


External market data and research-based role content can help create an informed first draft of expected proficiency, just as they can help with the underlying skills mapping. Then internal job family SMEs can bring organizational relevance to that baseline by validating what is truly expected in the company’s own context.


That is often the most practical model:


  1. Start with external intelligence

  2. Add internal validation

  3. Refine for company-specific expectations


And for many organizations, this should be a phased approach.  Start with strategic job families first.  Then expand into additional, high-incumbent clusters over time.  That is usually far more realistic than trying to solve everything everywhere at once.


The key is managing the change well.


Organizations do not need every single expected proficiency identified on day one to kick off a skills strategy. But they do need to be clear about what is in scope now, what will come later, and how the model will mature over time.


3. Then identify skills at the employee and candidate level


The third required starting point is employee- and candidate-level skill identification.  This is where a skills strategy has to move beyond roles and into actual people.  Because once an organization has a workable role architecture and a first draft of skills mapped to roles, the next question becomes:


How do we identify the skills that actually exist in our workforce and candidate populations?


This is where the strategy either becomes dynamic and useful, or static and shallow.  In my view, employee-level skill identification cannot rely only on a profile that someone updates once a year and then forgets about. It needs to be more dynamic and engaging, surfacing skill insights for both employees and candidates.


It also cannot rely only on traditional HR data sets.  HRIS, ATS, resumes, certifications, and self-declared skills all matter. But by themselves, they rarely create a rich enough, evolving picture of capability to support high-quality matching over time.


That is why the strongest approaches are built from a broader set of skill signals and enhanced by AI inference.  The question is no longer just, “What skills has this person told us they have?”  The better question is:


What evidence do we see, over time and across systems, that suggests this person has this capability, at this level, in this context?


Why dynamic identification matters

If employee-level skill identification is static, matching quality degrades very quickly.


People learn new things.  They work on new projects.  They complete training.  They demonstrate adjacent capabilities.  They move into new contexts that reveal new strengths.  A skills strategy has to be able to reflect that movement and growth.  Otherwise, the organization ends up matching people to work based on stale, incomplete, or overly narrow pictures of their capability.  A high-fidelity skill profile created at launch through significant employee interaction will lose effectiveness without a dynamic approach to continuous updates.  We have seen this reality time and time again over the years.


AI inference and skill adjacencies are not optional extras

This is also where AI matters in a very practical way.  Employee-level skill identification should not depend only on exact keyword matches or explicit declarations.  It should also be enhanced through:


  • AI inference — to detect likely skills from multiple signals and patterns

  • Skill adjacencies — to recognize that related capabilities often travel together, or at least can likely be upskilled faster

  • Contextual interpretation — to avoid treating all evidence as equally strong in all situations


This matters because the real workforce does not present itself in neat, standardized language.

One employee may list “data visualization.”  Another may show repeated evidence in Power BI, reporting, and dashboard work.  A third may demonstrate adjacent strengths through analytics projects and stakeholder storytelling.  Whether through the OSE AI skill partners or emerging native capability within SuccessFactors TIH, this nuance can realistically only be normalized with AI-based applications that produce cleaner insights and better-informed decisions.
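A deliberately tiny sketch can show the shape of that normalization: map varied evidence terms to a canonical skill, then weight each signal by how strong it is. Everything here is an illustrative assumption – the alias table stands in for a real skill ontology, and the weights stand in for an AI inference model.

```python
# Illustrative sketch (not a SuccessFactors API): normalize varied evidence
# to one canonical skill, then weight signals by strength into a confidence.
ALIASES = {  # tiny stand-in for a real skill ontology / AI normalization
    "power bi": "Data Visualization",
    "dashboard work": "Data Visualization",
    "data visualization": "Data Visualization",
}
SIGNAL_WEIGHTS = {  # assumed: not all evidence is equally strong
    "self-declared": 0.3,
    "learning-completion": 0.5,
    "project-evidence": 0.8,
}

evidence = [
    ("power bi", "project-evidence"),
    ("dashboard work", "project-evidence"),
    ("data visualization", "self-declared"),
]

confidence: dict[str, float] = {}
for raw_term, signal in evidence:
    skill = ALIASES.get(raw_term.lower())
    if skill:
        # accumulate, capped at 1.0 so repeated weak signals cannot dominate
        confidence[skill] = min(1.0, confidence.get(skill, 0.0) + SIGNAL_WEIGHTS[signal])
```

Three differently worded pieces of evidence collapse into one skill with one confidence score – which is the raw material downstream matching actually needs.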


The flow of work matters just as much as HR systems

SAP SuccessFactors continues to deliver roadmap and product releases that support ongoing skill profile inference from associated platform experiences, including learning completions, internal gigs, performance achievements, and mentoring activity.  These all contribute to creating more robust, dynamic skill profiles.


These are essential “price of admission” capabilities for a skills strategy, but they are also quickly becoming insufficient in and of themselves.  Although most of our current skills data is generated from traditional HR data sets, additional data signals derived from the flow of work are increasingly important.  These flow-of-work signals create the most robust path to higher-fidelity skill inference in the service of regularly updated, dynamic skill profiles – and therefore better downstream matching, user experience, engagement, and workforce insights.

 

Employee skill validation is where trust is won or lost

Employee skill validation is one of the most pressing and difficult issues in the entire skills strategy.  If the broader strategy is going to be trusted, employee-level skills need a strong measure of both reliability and validity. Reliability means different raters or methods would reach reasonably similar conclusions. Validity means the score is actually telling us something meaningful about real capability.  This is not just the binary question of whether the employee possesses the skill; more practically, it is about what level of proficiency the employee possesses.  And ultimately, can we trust the score?


Without both, the organization may still collect a lot of skills data, but it will struggle to use that data confidently in hiring, mobility, development, and succession decisions.  In practice, organizations often use many formal and informal validation approaches at once.  For heavy technical skills, we may see external testing or structured screening in candidate assessment. In healthcare and other operational environments, we may see observed-behavior checklists completed by trained assessors.  In many companies, we may see additional signals layered in over time: certifications, project work, learning completions, manager observations, and peer input. 
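The reliability half of that definition lends itself to a very simple first check: do different raters reach similar conclusions? The sketch below computes plain percent agreement between self and manager ratings and flags skills where the two diverge sharply. The four-point scale, tolerance, and data are illustrative assumptions, not a recommended instrument.

```python
# Sketch of a basic reliability check: would different raters reach
# reasonably similar conclusions? Percent agreement between self and
# manager ratings on an assumed 1-4 scale; data is illustrative.
ratings = {
    # skill: (self_rating, manager_rating)
    "SQL":                (3, 3),
    "Negotiation":        (4, 2),
    "Stakeholder Mgmt":   (2, 2),
    "Data Visualization": (4, 3),
}

TOLERANCE = 0  # exact agreement; relax to 1 for "within one level"
agreements = [abs(s - m) <= TOLERANCE for s, m in ratings.values()]
agreement_rate = sum(agreements) / len(agreements)

# Skills where self and manager disagree by more than one level are
# candidates for the more rigorous validation methods discussed below.
low_reliability = [k for k, (s, m) in ratings.items() if abs(s - m) > 1]
```

Even this crude measure makes the problem visible: a 50% exact-agreement rate is not a foundation for hiring or succession decisions on its own.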


That variety is not a weakness by itself.  In many cases, it is exactly what a modern skills strategy requires.  That said, most organizations still default to the easiest approach:


Self-score plus manager rating.


I understand why. It is simple. It is familiar. It is scalable. And it is relatively easy to launch. But it is also the part of the model I most struggle to see persist as the primary validation method on its own.

This could not be a SuccessFactors-oriented article touching on assessment if I did not reference my good colleague, Dr. Steve Hunt.  His guidance is helpful here on manager assessment. In my significantly shortened summary of a few of his writings: manager input matters, but single-manager evaluations should not be treated as unquestioned truth.  Broader calibration and structured approaches help improve reliability.


The broader research on supervisory ratings points in a similar direction. The problem is not just potential bias. It is also inconsistency across managers, limited visibility into all aspects of employee capability, and the absence of structured criteria in most real-world settings. 


One of our global oil and gas clients conducts expert review panels for every employee who receives an “expert” skill rating from their manager on select, mission-critical skills.  Of those given initial expert status by their manager, only about 25% pass the more stringent expert panel criteria and keep the status.  Not only in this situation, but more broadly, I would assert that managers have low reliability in assessing skill proficiency as ratings approach the higher levels of mastery.


So my recommendation is practical:  Use self-score and manager input as signals, not as the unchallenged source of truth.  Where the business stakes are high, structure should increase. That can mean:


  • skill-specific rubrics

  • behavioral anchors

  • observable evidence

  • expert calibration sessions

  • second-level review

  • and qualified-assessment instruments for selected skill families


The goal is not to make every skill require a formal test before launch. That would stall every program immediately.  The better approach is usually tiered:


  1. Use dynamic inference and multiple skill signals broadly

  2. Use self and manager input as part of the picture

  3. Apply more rigorous validation methods where the stakes are highest


That creates a validation model that is practical enough to launch, but strong enough to earn trust over time.
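The tiered model can be expressed as a simple routing rule: the higher the business stakes for a skill, the more structure its validation gets. The tier names, rules, and skill portfolio below are illustrative assumptions, not a prescribed configuration.

```python
# Sketch of the tiered validation model: route each skill to a validation
# method based on business stakes. Tiers and data are illustrative.
def validation_tier(stakes: str) -> str:
    if stakes == "high":    # e.g. safety- or mission-critical skills
        return "structured assessment + expert calibration"
    if stakes == "medium":
        return "self + manager rating with rubric and second-level review"
    return "dynamic inference + multi-signal evidence"  # the broad default

portfolio = {
    "Process Safety": "high",
    "SQL": "medium",
    "Presentation Skills": "low",
}
plan = {skill: validation_tier(stakes) for skill, stakes in portfolio.items()}
```

The point of encoding it this way is that the rigor is explicit and auditable: anyone can see why a given skill gets a formal assessment while another relies on inference and signals.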

 

 

Why these three things belong together


These are three separate starting points, but they only work when they connect.


A workable role architecture tells you how work is organized.  Skills-to-role mapping tells you what capability matters in that work. Skill-to-employee identification tells you where that capability actually exists in the workforce and candidate population.


That is the real beginning of a skills strategy. Not the final state.  But the first operating state. And if any one of those three is missing, the rest of the model weakens quickly.


The bottom line


If Part 1 argued that a skills strategy is an organizational shift, and Part 2 argued that internal consistency across HR functions is what keeps it from fragmenting, then Part 3 makes the next point clear:


The first required steps in a skills strategy are a workable role architecture, skills-to-role mapping, and skill-to-employee identification.


Those are the foundational elements on both sides of the equation:

the role/skill side

and the employee/candidate/skill side


That is where the work begins.


Not with a blank sheet of paper.

Not with a perfect future-state design.

But with a workable structure for organizing roles, a credible and efficient first pass at the skills that matter for them, and a dynamic approach to identifying those same skills in people.


That is how companies start building a skills strategy in earnest.  And that is also why this conversation will matter even more over time.  Because the pace of change in work itself — both human and increasingly agentic — is putting new pressure on the very roles, work components, and career architectures we are trying to organize today. The earliest adopters of work ontologies and dynamic work architecture are already starting to wrestle with that issue, many of them after launching skills-focused strategies over the last few years. I am excited to come back to this evolution in Part 6 of this series.


But for now, the message is simpler:


Do not let perfection be the enemy of progress.

Start with a practical role architecture.

Map skills to roles in a way that is good enough to learn from.

Identify skills in people dynamically enough to trust the downstream matching.

Then improve from there.

 
 
 