CARA WOJCIK
Workplace Competency Demonstration
Workflow redesign for competency-based skill evaluation in advanced manufacturing apprenticeships.
ROLE
Senior UX/Instructional Designer
Workflow Redesign, User Research, Prototype Design
FOCUS
Competency-driven workflows, Evidence capture, Skill evaluation
DOMAIN
Advanced manufacturing training
Workforce competencies
TOOLS
Prototype: Sketch, Marvel (interactive prototype)
Architecture: FigJam
SUMMARY
Redesigned the workflow trainers use to create portfolio tasks in an advanced manufacturing pre-apprenticeship platform
Reorganized task design around workforce competencies rather than open-ended prompts
Guided trainers to define competency goals before writing task instructions
Structured learner submissions to capture clearer, evaluable evidence of workplace skills through multimedia
Built an interactive prototype to test the revised workflow with trainers
OVERVIEW
Apprenticeship portfolios for demonstrating workplace skills
IMTfolio is a multimedia portfolio platform used in Industrial Manufacturing Technician (IMT) pre-apprenticeships to document evidence of workplace skills. Instead of relying solely on written exams, learners document real-world demonstrations of their skills through photos, videos, and short written reflections.
The platform organizes these submissions through trainer-defined tasks (called “challenges” in the system). Each task asks learners to perform a workplace activity—such as operating equipment, following a safety procedure, or completing a fabrication step—and submit evidence showing how they completed it.
The resulting portfolio provides employers with a structured record of a learner’s abilities and readiness for skilled manufacturing work.
This redesign was developed during a pilot with WRTP | BIG STEP, a Milwaukee-based workforce training organization that prepares participants for careers in advanced manufacturing.
EXISTING WORKFLOW
Post-hoc Competency Selection
In the original system, trainers created portfolio tasks using a short authoring form designed for speed and simplicity. Each task was intended to prompt learners to demonstrate and capture evidence of multiple required competencies.
The authoring workflow followed this sequence:
Prompt → Activity → Competency
Trainers typically:
Selected a category
Wrote a task title and instructions
Attached media or references
Added competency tags at the end of the process
Because competencies were applied after prompts were written, they often functioned as descriptive metadata rather than targeted requirements.
As a result:
Many tasks were overly broad
Prompts did not reliably elicit evidence of the intended skill
Learner submissions varied widely in quality
Reviewing tasks created during the early pilot confirmed that learners were not consistently connecting their work to the competencies the portfolio was intended to demonstrate.
The problem was not trainer expertise, but the system’s structure. The workflow did not ensure that competencies shaped the design of each task.
Legacy Task Authoring Workflow

Because competencies were introduced at the end of the workflow, they did not shape task design or the evidence learners produced.
SYSTEM REDESIGN
Competency-driven Task Authoring
The redesign reframes task creation as a goal-oriented workflow for producing evaluable competency evidence.
Instead of beginning with an open-ended prompt, the workflow begins by selecting the competencies the task is intended to demonstrate.
Redesigned sequence:
Competency → Goal → Activity
Competencies define the structure of the task, guiding how goals are set, prompts are written, and evidence is produced.
The revised authoring workflow:
Begins with selecting the competencies learners should demonstrate
Defines a goal aligned to those competencies
Generates prompts that require observable evidence of skill
Supports tasks with media examples and instructions
By moving competency selection to the start of the workflow, each task is structured around a specific skill outcome.
This shifts competencies from descriptive tags to the organizing structure of the system, ensuring each task produces observable, evaluable evidence of skill.
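The structural shift from trailing tags to a required starting input can be sketched as two data shapes. This is a hypothetical illustration, assuming a simple schema; the interface and function names are not the platform's actual data model.

```typescript
// Legacy shape: competencies are trailing, optional metadata.
interface LegacyTask {
  category: string;
  title: string;
  instructions: string;
  media: string[];
  competencyTags: string[]; // applied last; may be empty
}

// Redesigned shape: competencies come first and drive the goal and prompt.
interface CompetencyTask {
  competencies: string[]; // selected first; the task cannot exist without them
  goal: string;           // defined in relation to the selected competencies
  instructions: string;   // prompt written to elicit observable evidence
  media: string[];
}

// Competency-first authoring step: the form refuses to proceed
// until at least one target competency has been selected.
function beginTask(competencies: string[]): Pick<CompetencyTask, "competencies"> {
  if (competencies.length === 0) {
    throw new Error("Select at least one competency before writing the task.");
  }
  return { competencies };
}
```

Making competencies a required input to the first authoring step, rather than an optional field at the end, is what turns them from descriptive metadata into a precondition for task design.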
Contextual guidance and examples are embedded directly within the authoring flow, supporting trainers without interrupting their work.
Revised Task Authoring Workflow

The workflow shifts from prompt-first authoring to a competency-first sequence, ensuring that goals, prompts, and evidence are defined in relation to specific skills.
DESIGN VALIDATION
Comparison: Legacy vs. Redesigned Task
To evaluate the redesigned workflow, trainers recreated previously authored tasks using the revised authoring form. This enabled a direct comparison between tasks created under the original and redesigned workflows.
The legacy task uses a broad prompt focused on recalling information from a video. Competencies are applied afterward and do not shape how the task is structured.
In the revised version, competencies are selected first and the task is built around a defined goal. The prompt requires learners to apply their knowledge in a specific scenario and demonstrate their process, producing clearer and more evaluable evidence of skill.
Legacy Task: Prompt-first Authoring

The short legacy workflow prioritized speed but provided insufficient support and context for trainers.
Revised Task: Competency-driven Authoring

The longer, structured workflow provides guidance that helps trainers create clearer, more effective tasks.
OUTCOMES
Improving the quality and reliability of competency evidence
The redesigned workflow improves how tasks are authored, resulting in clearer, more consistent, and more evaluable evidence of workforce skills.
Tasks are structured around specific skill outcomes
Trainers begin by selecting the competencies a task should demonstrate
Tasks are anchored to specific skills rather than general topics
Instructions are written with a clear outcome in mind
Tasks created using the redesigned workflow are more focused and aligned to the competencies they are intended to demonstrate.
Trainers translate expertise into measurable tasks
Trainers define goals aligned to selected competencies
Tasks clarify what successful performance looks like
Prompts elicit observable evidence of that performance
Because trainers are industry professionals rather than trained educators, the system plays a critical role in helping them translate subject matter expertise into structured, evaluable tasks.
Learner submissions reflect applied skill, not recall
Prompts require learners to demonstrate how they perform a skill
Tasks emphasize process and application over explanation
Evidence is captured through structured responses and multimedia
This increases the likelihood that portfolios reflect real-world performance rather than recall of information.
Task quality becomes more consistent across trainers
Guidance is embedded directly within the authoring workflow
Trainers receive support at the point of task creation
Tasks follow a more consistent structure across the program
By supporting trainers during authoring, the system reduces variability and improves the overall quality of tasks and resulting learner evidence.
CAPABILITIES
What this work demonstrates
Workflow design for competency-based evaluation systems
Structuring inputs to produce consistent, measurable outputs
Supporting subject matter experts in authoring evaluable tasks
Applying instructional design and Universal Design for Learning within product systems
Designing for evidence quality and assessment reliability