Assessment & Validation System
AI Trade School
Certified AI Operator (CAIO)
Contents
Core Framework
Core Framework
1. Assessment Philosophy
2. Three-Level Model
3. Tier 1 Standards
4. Tier 2 Standards
5. Course-Level Rubrics
6. Domain Validation
7. Capstone Assessment
8. Written Examination
Standards & Appendices
9. Oral Defense
10. Unified Proficiency Scale
11. Pass / Fail Logic
12. Remediation & Reassessment
13. Evaluator Standards
14. Assessment Integrity
15. Records & Audit Trail
16. Governance & Revision
Appendices
A: Capstone Rubric
B: Oral Defense Rubric
C: Domain Checklist
D: Decision Flowchart
Section 1
Assessment Philosophy
The AI Trade School CAIO certification program employs a competency-based assessment model. Credentials are awarded on the basis of demonstrated mastery — not seat hours, content consumption, or time in program. Every assessment instrument answers a single question: can this individual perform this competency to a professional standard under authentic conditions?
Assessment protects three things simultaneously: the value of the credential, employer confidence in certified operators, and the institutional integrity of AI Trade School.
Assessment Principles
- Applied: Assessment requires application to authentic tasks. Rote memorization of definitions, terminology, or procedures is insufficient to demonstrate competency.
- Observable: Competency is demonstrated through work products, performances, and demonstrations. Self-report is not accepted as evidence of competency.
- Measurable: All assessments use published criteria, rubrics, and scoring models with defined proficiency levels. No unconstrained subjectivity in evaluation.
- Repeatable: Assessment instruments are standardized so that different evaluators, evaluating the same work, reach the same conclusion. Inter-rater reliability is a design priority.
- Aligned: Every assessment is aligned to specific competency domains and credential requirements. No assessment exists without a direct connection to certification standards.
- Transparent: All rubrics, criteria, and standards are published and available to candidates in advance. No hidden requirements or undisclosed evaluation criteria.
- Ethical: Assessment is free from bias, conflict of interest, and arbitrary decision-making. Candidates have the right to fair evaluation and appeals processes.
What Assessment Is Not
The following do not constitute evidence of competency and are not used in certification decisions:
- Attendance or login records
- Content consumption metrics (videos watched, pages viewed)
- Peer ratings or endorsements
- Self-assessment or self-reported skill levels
- Forum participation (unless rubric-evaluated)
- Time spent on platform or in coursework
Certification decisions are made exclusively on the basis of evaluated performance against published standards.
Section 2
Three-Level Assessment Architecture
The CAIO assessment system operates across three sequential levels. Candidates cannot advance to a higher level without completing the prior level in full.
| Level | Purpose | Scope | Instruments |
|---|---|---|---|
| Course-Level | Skill acquisition & knowledge verification | Individual courses (Tier 1 & Tier 2) | Quizzes, demonstrations, rubric-evaluated artifacts |
| Domain-Level | Competency validation across domains | 8 CAIO competency domains | Portfolio review, cross-course evaluation, domain scoring |
| Certification-Level | Certification readiness & credential award | Complete CAIO candidacy | Capstone project, written exam, oral defense |
Assessment Flow
1. Complete all Tier 1 courses; pass all knowledge checks and activities.
2. Complete all Tier 2 courses; achieve Proficient or above on all rubric-evaluated artifacts.
3. Portfolio Readiness Review and Domain-Level Validation across all 8 competency domains.
4. Enroll as CAIO candidate upon successful domain validation.
5. Submit capstone project for evaluation.
6. Sit written examination under proctored conditions.
7. Complete oral defense before evaluation panel.
8. Institution reviews all assessment results and makes credential determination.
Section 3
Tier 1 Assessment Standards
Tier 1 courses serve a gatekeeping function. They verify foundational knowledge and basic competency required to enter applied coursework. Tier 1 performance does not contribute to CAIO certification scoring.
| Course | Assessment Type | Passing Standard |
|---|---|---|
| ATS-101: AI Foundations | Multiple-choice knowledge assessment; short-answer reflection | 75%; reflection complete |
| ATS-102: ChatGPT Fundamentals | Prompt portfolio (3 categories); limitations quiz | 75% on quiz; portfolio complete |
| ATS-103: AI Tools Overview | Tool Evaluation Matrix (5 tools); knowledge assessment | 75%; matrix complete |
| ATS-104: AI Productivity | Productivity Audit with metrics; knowledge assessment | 75%; audit complete |
| ATS-105: AI Careers & Ethics | Ethical Case Analysis (2 scenarios); Career Plan; assessment | 75%; both artifacts complete |
Grading Model
All Tier 1 courses are graded Pass / Not Yet Passed. There is no partial credit. Knowledge assessments permit up to 3 retakes with a 48-hour waiting period between attempts. Activity-based assessments permit 1 revision. Upon exhaustion of all attempts, candidates must wait 60 days and re-enroll in the course.
Section 4
Tier 2 Assessment Standards
Tier 2 courses assess applied competence through portfolio artifacts evaluated on a standardized 4-level rubric. These artifacts form the foundation of the candidate's professional portfolio and contribute directly to domain-level validation.
Unified Rubric Scale
| Level | Label | Description |
|---|---|---|
| 4 | Distinguished | Production-ready, exemplary work. Exceeds professional standards. Could serve as a model for others. |
| 3 | Proficient | Meets professional standards. Competent, reliable, suitable for professional use. Minimum for course completion. |
| 2 | Developing | Approaches but does not meet professional standards. Gaps present affecting reliability. Requires revision. |
| 1 | Deficient | Does not approach professional standards. Incomplete or fundamentally incorrect. Substantial work required. |
Course Pass Rule: Every criterion on every rubric must achieve Level 3 (Proficient) or above. There is no averaging. A score of 4 on one criterion does not compensate for a score of 2 on another. Artifacts scoring below Level 3 on any criterion are returned for revision.
Candidates are permitted 2 revisions per course. Revisions must be substantive — cosmetic changes without addressing evaluator feedback do not qualify. Failure to achieve Proficient after 2 revisions requires a 30-day waiting period followed by retaking the full course.
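The course pass rule above can be sketched as a simple conjunctive check. This is an illustrative sketch only (the function and criterion names are not part of any official tooling): every criterion must independently reach Level 3, and no average is computed.

```python
# Sketch of the Tier 2 course pass rule. Illustrative names only;
# every rubric criterion must reach Level 3 (Proficient) -- no averaging.

PROFICIENT = 3

def course_passes(criterion_scores: dict[str, int]) -> bool:
    """True only if every rubric criterion scores Level 3 or above."""
    return all(score >= PROFICIENT for score in criterion_scores.values())

# A 4 on one criterion does not offset a 2 on another:
scores = {
    "Prompt Structure & Design": 4,
    "Output Reliability": 3,
    "Iteration & Version Control": 2,  # below Proficient -> artifact returned
    "Documentation Quality": 3,
    "Organizational Applicability": 3,
}
assert course_passes(scores) is False            # fails despite a 3.0 average
assert course_passes({k: 3 for k in scores})     # passes once all reach Level 3
```

Note the deliberate use of `all(...)` rather than a mean: the rule is conjunctive by design.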
Section 5
Course-Level Rubrics
The following rubrics define the evaluation criteria for each Tier 2 course. Each artifact is evaluated across 5 criteria on the 4-level proficiency scale. All criteria must achieve Level 3 (Proficient) or above for course completion.
ATS-201: Prompt Engineering for Business Operations
Artifact: Organizational Prompt Library (20+ reusable prompts)
| Criterion | 1 — Deficient | 2 — Developing | 3 — Proficient | 4 — Distinguished |
|---|---|---|---|---|
| Prompt Structure & Design | Vague, unstructured, missing elements | Basic structure, inconsistent | Clear structure with role/task/format/constraints; consistent methodology | Highly optimized; chaining, conditional logic, multi-step |
| Output Reliability | Unpredictable, unreliable | Some consistency, frequent variation | Consistent, controlled; tone/length/format reliable | Production-grade with tested edge cases; QA documented |
| Iteration & Version Control | No refinement evidence | Minimal iteration, no tracking | Logical refinement documented; version history maintained | Strategic optimization with A/B testing; comprehensive versioning |
| Documentation Quality | Incomplete or missing | Basic descriptions, unclear instructions | Clear instructions, expected outputs, limitations, audience | Professional-grade; team deployment ready; troubleshooting guides |
| Organizational Applicability | Not professionally relevant | Limited applicability, generic | Business-ready, aligned to specific functions | Enterprise-grade; organized by department; governance guidelines |
ATS-202: AI Automation & Workflow Design
Artifact: AI Workflow Blueprint
| Criterion | 1 — Deficient | 2 — Developing | 3 — Proficient | 4 — Distinguished |
|---|---|---|---|---|
| Process Mapping | No clear map; disorganized | Basic flow; missing decision points | Complete with inputs, outputs, decisions, data flows, bottlenecks | Comprehensive with optimization analysis; current vs. improved |
| Automation Logic | Broken or absent | Partial; triggers/actions incomplete | Functional trigger-action-condition with branching and sequencing | Robust; parallel processing; dynamic conditions |
| Error Handling | None | Minimal; happy-path only | Appropriate fallback logic and human escalation | Comprehensive error taxonomy; graceful degradation |
| Testing & Validation | No testing evidence | Limited; ideal conditions only | Tested normal and edge-case; documented | Systematic plan including adversarial; regression testing |
| Documentation & Handoff | Incomplete or unusable | Basic; gaps in maintenance | Clear; enables third-party maintenance | Professional ops manual with runbooks and SLA definitions |
ATS-203: AI for Business Operations
Artifact: AI Operations Plan
| Criterion | 1 — Deficient | 2 — Developing | 3 — Proficient | 4 — Distinguished |
|---|---|---|---|---|
| Readiness Assessment | None or superficial | Basic; missing key dimensions | Thorough: infrastructure, workforce, data, culture | Comprehensive maturity scoring with gap analysis |
| Solution Alignment | Doesn't address need | Some connection; vague | Clear alignment with specific, measurable objectives | Strategic alignment with organizational roadmap |
| Implementation Planning | None or unrealistic | Basic timeline only | Phased: timeline, resources, budget, risks | Detailed with milestones, gates, contingencies, change management |
| ROI & Impact | No metrics | Generic; no baseline | Appropriate metrics with baseline and methodology | Comprehensive framework; leading/lagging indicators |
| Stakeholder Communication | None | Excessive jargon or inaccurate | Clear, accurate for non-technical decision-makers | Executive-grade with visualization and recommendations |
ATS-204: AI Content & Communication Systems
Artifact: AI Content System Blueprint
| Criterion | 1 — Deficient | 2 — Developing | 3 — Proficient | 4 — Distinguished |
|---|---|---|---|---|
| Content Strategy | No coherent strategy | Basic; no channel/audience specifics | Multi-channel, audience-aligned, brand-voiced | Sophisticated with calendar, KPIs, optimization cycle |
| Prompt Templates | Vague or unusable | Basic; inconsistent output | Reliable brand-aligned templates across formats | Advanced with tone calibration and audience variants |
| QA System | None | Basic; no structured criteria | Documented workflow with criteria, gates, revision | Multi-layer QA with automated checks and quality metrics |
| Editorial Workflow | None | Informal; unclear roles | Structured with roles, handoffs, timelines, escalation | Professional editorial ops with version control and audit |
| Governance | None | Minimal; key policies missing | Comprehensive: disclosure, attribution, data handling, acceptable use | Enterprise framework with compliance monitoring and incident response |
ATS-205: AI Tool Integration & System Design
Artifact: AI System Architecture Document
| Criterion | 1 — Deficient | 2 — Developing | 3 — Proficient | 4 — Distinguished |
|---|---|---|---|---|
| Architecture Design | No coherent architecture | Basic; unclear integration | Logical, cohesive with documented rationale | Elegant, scalable with migration paths |
| Tool Selection | No rationale | Basic; missing criteria | Documented evaluation: interoperability, cost, security | Strategic with TCO modeling and vendor risk |
| Data Flow | Unclear or broken | Basic; missing dependencies | Complete with dependencies, transformations, integrity checks | Comprehensive with lineage, validation, monitoring |
| Security & Risk | None | Minimal attention | Documented: access control, data handling, vulnerability | Comprehensive with threat modeling and incident procedures |
| Maintenance & Audit | None | Basic notes only | Documented schedule, audit procedures, update protocols | Professional ops with dashboards, SLA tracking, capacity planning |
Section 6
Domain Validation
Domain validation bridges course-level and certification-level assessment. It evaluates whether a candidate has achieved competency across each of the 8 CAIO competency domains using evidence drawn from multiple courses and artifacts.
Domain Scoring Model
| Score | Label | Description |
|---|---|---|
| M | Mastery | Independent, professional-level competency. Could operate autonomously. Consistently Distinguished performance. |
| C | Competent | Safe, reliable competency. Sufficient knowledge, skill, and judgment for professional practice. Consistently Proficient or above. Minimum for certification. |
| R | Needs Remediation | Gaps present. Insufficient evidence for unsupervised practice. Targeted remediation required before proceeding. |
| I | Insufficient | Fundamental gaps across multiple competency areas. Must retake relevant course(s). |
Passing: Competent (C) or Mastery (M) on ALL eight domains. A score of R or I on any domain blocks advancement to certification-level assessment.
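The same conjunctive logic applies at the domain level. The sketch below is illustrative (function name and data shape are assumptions, not institutional tooling); it encodes the rule that all eight domains must score C or M.

```python
# Illustrative sketch of the domain-validation gate.
# Advancement requires Competent (C) or Mastery (M) on ALL eight domains.

PASSING_SCORES = {"M", "C"}

def domains_validated(domain_scores: dict[str, str]) -> bool:
    """True only if all 8 domains score M or C; any R or I blocks advancement."""
    return (len(domain_scores) == 8
            and all(s in PASSING_SCORES for s in domain_scores.values()))

scores = {
    "AI Literacy": "M", "Prompt Engineering": "C", "Workflow & Automation": "C",
    "System Integration": "C", "Business Application": "C",
    "Ethics & Governance": "R",  # a single R blocks certification-level assessment
    "Documentation & SOPs": "C", "Deployment & Monitoring": "C",
}
assert domains_validated(scores) is False
assert domains_validated({d: "C" for d in scores}) is True
```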
Portfolio Readiness Review
1. Verify all required artifacts are present and scored Proficient or above on all rubric criteria.
2. Evaluate aggregate evidence for each of the 8 competency domains across all submitted artifacts.
3. Assign M, C, R, or I to each domain based on the totality of evidence.
4. Document rationale for each domain score with specific evidence citations.
5. Communicate results to the candidate in writing with actionable feedback.
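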
Remediation
Candidates scoring R (Needs Remediation) on any domain receive specific feedback identifying competency gaps and targeted remediation activities. Upon completion, the domain is reevaluated. Candidates scoring I (Insufficient) must retake the relevant course(s) in full.
Section 7
Capstone Assessment
The capstone project is the centerpiece of the CAIO certification assessment. It requires candidates to integrate all 8 competency domains into a single comprehensive project demonstrating professional-level AI operations competency. The capstone carries a weight of 40% in the final certification determination.
Required Deliverables
- Organizational Context Analysis
- AI System Design & Architecture
- Implementation Documentation
- Standard Operating Procedures (minimum 2)
- Ethical & Risk Assessment
- Impact Evaluation
- Stakeholder Communication Package
A missing deliverable results in automatic return of the capstone without evaluation. All 7 deliverables must be present before the capstone enters the evaluation queue.
Capstone projects are evaluated by a minimum of 2 independent evaluators using the rubric defined in Appendix A. All 9 rubric dimensions must achieve Level 3 (Proficient) or above. One revision is permitted. Failure to achieve Proficient after revision requires a 90-day waiting period followed by submission of a new capstone project.
Section 8
Written Examination
| Question Type | Count | Purpose |
|---|---|---|
| Multiple-Choice | 60 | Breadth of knowledge; one correct answer, three distractors |
| Short-Answer | 20 | Application to specific scenarios; scored 0–2 |
| Scenario-Based | 20 | Complex reasoning, multi-domain integration, professional judgment; scored 0–4 |
Specifications: 100 questions total, 3-hour time limit, 80% passing threshold, minimum 70% per domain, remotely proctored, multiple exam versions drawn from a secured item bank. The written examination carries a weight of 30% in the final certification determination.
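The exam thresholds can be sketched as a scoring check. The per-question point values below (1 for multiple-choice, 0–2 for short-answer, 0–4 for scenario, for 180 possible points) are assumptions inferred from the question-type table, and the function name is illustrative; the example uses two domains for brevity where a real exam spans all eight.

```python
# Illustrative exam-scoring sketch. Assumed point values: MC = 1 pt each,
# short-answer 0-2, scenario 0-4, so total possible = 60 + 40 + 80 = 180.

OVERALL_THRESHOLD = 0.80   # 80% overall to pass
DOMAIN_THRESHOLD = 0.70    # minimum 70% within every domain

def exam_passes(earned: dict[str, float], possible: dict[str, float]) -> bool:
    """Check the overall threshold and the per-domain floor together."""
    overall = sum(earned.values()) / sum(possible.values())
    per_domain_ok = all(earned[d] / possible[d] >= DOMAIN_THRESHOLD
                        for d in possible)
    return overall >= OVERALL_THRESHOLD and per_domain_ok

# 84% overall, but AI Literacy at 60% falls below the 70% domain floor:
possible = {"AI Literacy": 20, "Ethics & Governance": 30}
assert exam_passes({"AI Literacy": 12, "Ethics & Governance": 30}, possible) is False
assert exam_passes({"AI Literacy": 16, "Ethics & Governance": 29}, possible) is True
```

The per-domain floor is why a strong aggregate score alone is not sufficient.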
Domain Weight Distribution
| Domain | Weight | Approx. Questions |
|---|---|---|
| 1. AI Literacy | 10–12% | 10–12 |
| 2. Prompt Engineering | 12–15% | 12–15 |
| 3. Workflow & Automation | 12–15% | 12–15 |
| 4. System Integration | 10–12% | 10–12 |
| 5. Business Application | 12–15% | 12–15 |
| 6. Ethics & Governance | 15–18% | 15–18 |
| 7. Documentation & SOPs | 8–10% | 8–10 |
| 8. Deployment & Monitoring | 10–12% | 10–12 |
Domain 6 (Ethics & Governance) carries intentionally elevated weight. Ethical reasoning is a core professional competency for certified AI operators and is weighted accordingly.
One retake is permitted with a 30-day waiting period. The retake uses a different exam version. A second failure requires a 6-month waiting period followed by re-enrollment in the certification program.
Section 9
Oral Defense
| Component | Duration | Focus |
|---|---|---|
| Capstone Presentation | 20 minutes | Clarity, thoroughness, professionalism, logical structure |
| Panel Questions | 20 minutes | Depth of understanding, rationale for decisions, alternatives considered, critical thinking under challenge |
| Ethical Scenario | 10 minutes | Novel scenario requiring framework application and actionable recommendation |
The oral defense panel consists of a minimum of 2 qualified reviewers. Both panelists must independently rate the candidate as passing. In the event of a split decision, a third panelist is brought in to make the determination. The defense is conducted via live video, recorded with candidate consent. The oral defense carries a weight of 30% in the final certification determination.
Fail Conditions
- Candidate cannot explain their own system design or implementation decisions
- Candidate misrepresents AI capabilities or limitations
- Candidate demonstrates ethical blind spots on direct questioning
- Candidate cannot engage constructively with critical challenge or feedback
- Any dimension on the defense rubric scores below Level 3
One reattempt is permitted with a 30-day waiting period. The candidate must submit a written response addressing evaluator feedback before the reattempt. A second failure requires a 6-month waiting period and re-enrollment in the certification program.
Section 10
Unified Proficiency Scale
The following 4-point proficiency scale is applied identically across all rubric-evaluated assessments: Tier 2 course rubrics, the Capstone Rubric (Appendix A), and the Oral Defense Rubric (Appendix B). All evaluators are calibrated to this scale.
| Level | Label | Description |
|---|---|---|
| 4 | Distinguished | Exceeds professional standards. Exceptional depth, originality, and quality. Could serve as a model for other candidates. |
| 3 | Proficient | Meets professional standards. Competent, reliable work suitable for professional use. Minimum for course completion and CAIO certification. |
| 2 | Developing | Approaches but does not meet professional standards. Gaps present that affect reliability. Requires revision. |
| 1 | Deficient | Does not approach professional standards. Fundamental gaps in understanding or execution. Substantial work required. |
Section 11
Pass / Fail Logic
Certification Awarded When ALL Conditions Are Met
- All 5 Tier 1 courses passed
- All 5 Tier 2 courses passed with all rubric criteria at Level 3 or above
- All 8 competency domains scored at Competent (C) or Mastery (M)
- Capstone project scored Level 3 or above on all 9 dimensions
- Written examination score of 80% or above overall, with no domain below 70%
- Oral defense scored Level 3 or above on all 6 dimensions by both panelists
Certification Withheld If Any of the Following Apply
- Any Tier 1 or Tier 2 course not passed
- Any Tier 2 rubric criterion below Level 3
- Any competency domain scored R or I (unremediated)
- Any capstone dimension below Level 3
- Written exam score below 80% overall or any domain below 70%
- Any oral defense dimension below Level 3
No compensatory scoring at any level. A Level 4 on System Design does not compensate for a Level 2 on Ethics. A 95% exam score does not compensate for a failed oral defense. Every competency matters independently.
Weight Summary
| Component | Weight | Passing Standard |
|---|---|---|
| Portfolio / Domain Review | Prerequisite Gate | C or M on all 8 domains |
| Capstone Project | 40% | Level 3+ on all 9 dimensions |
| Written Examination | 30% | 80% overall; 70% per domain |
| Oral Defense | 30% | Level 3+ on all 6 dimensions |
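The weight table and the non-compensatory rule can be combined in one sketch. This is illustrative only: the award decision is purely gate-based, the 40/30/30 weights produce a composite whose reporting use is assumed here rather than specified, and the prerequisite gates (courses, domain review) are omitted for brevity.

```python
# Sketch combining the non-compensatory gates with the published 40/30/30
# weights. Passing is gate-based; the weighted composite is assumed to be
# reported alongside the decision, not used to offset a failed component.

WEIGHTS = {"capstone": 0.40, "written_exam": 0.30, "oral_defense": 0.30}

def certification_decision(component_passed: dict[str, bool],
                           component_score: dict[str, float]) -> tuple[bool, float]:
    """Return (credential awarded?, weighted composite score)."""
    awarded = all(component_passed[c] for c in WEIGHTS)
    composite = sum(WEIGHTS[c] * component_score[c] for c in WEIGHTS)
    return awarded, composite

# A 95% exam cannot rescue a failed oral defense:
awarded, composite = certification_decision(
    {"capstone": True, "written_exam": True, "oral_defense": False},
    {"capstone": 0.90, "written_exam": 0.95, "oral_defense": 0.40},
)
assert awarded is False
```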
Section 12
Remediation & Reassessment
| Assessment | Remediation | Wait Period | After Exhaustion |
|---|---|---|---|
| Tier 1 Knowledge Assessments | 3 retakes | 48 hours | 60-day wait; re-enroll in course |
| Tier 1 Activities | 1 revision | None | 60-day wait; re-enroll in course |
| Tier 2 Artifacts | 2 revisions | None | 30-day wait; retake course |
| Domain Validation | Targeted remediation | As prescribed | Retake relevant course(s) |
| Capstone Project | 1 revision | None | 90-day wait; submit new capstone |
| Written Examination | 1 retake | 30 days | 6-month wait; re-enroll |
| Oral Defense | 1 reattempt | 30 days | 6-month wait; re-enroll |
All revisions must be substantive. Cosmetic changes without addressing evaluator feedback do not qualify as a substantive revision. Candidates who have exhausted standard remediation opportunities may petition for additional attempts; petitions are reviewed on a case-by-case basis and are not guaranteed.
Section 13
Evaluator Standards
Qualifications
- Minimum 3 years of relevant professional experience
- Demonstrated domain competence in assigned evaluation areas
- Completed institutional Evaluator Training program
- Familiar with all rubrics, scoring models, and assessment frameworks
- No conflicts of interest with candidates under evaluation
Responsibilities
Evaluators are responsible for:
- Applying rubrics consistently and without bias
- Documenting all scoring decisions with evidence-based rationale
- Providing actionable feedback that guides candidate improvement
- Participating in regular calibration sessions
- Reporting any concerns regarding candidate integrity or assessment instrument validity
- Maintaining strict confidentiality of all candidate materials and results
Calibration
All evaluators undergo an onboarding training program that includes scoring benchmark artifacts at each proficiency level. Quarterly calibration sessions ensure ongoing alignment. Annual inter-rater reliability analysis targets a Cohen's kappa of 0.80 or above. Evaluators whose scores consistently deviate from calibrated standards receive additional training or are removed from the evaluation panel.
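The kappa target above can be computed with the standard Cohen's kappa formula, which corrects raw agreement for chance agreement. The sketch below uses the textbook formula; the two rating sequences are purely illustrative.

```python
# Minimal Cohen's kappa sketch for inter-rater reliability analysis.
# kappa = (observed agreement - expected chance agreement) / (1 - expected).
from collections import Counter

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    # Chance agreement: product of each rater's marginal label frequencies.
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Two evaluators scoring ten artifacts on the 1-4 scale, agreeing on 9 of 10:
a = [3, 4, 3, 2, 3, 4, 3, 3, 2, 3]
b = [3, 4, 3, 2, 3, 3, 3, 3, 2, 3]
print(round(cohens_kappa(a, b), 2))  # -> 0.81, just above the 0.80 target
```

A kappa of 0.80+ indicates agreement well beyond what matching marginal frequencies would produce by chance.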
Section 14
Assessment Integrity
Academic Integrity
All submitted work must be the candidate's own. AI tools may be used as instructed within the curriculum, but submissions must demonstrate the candidate's personal understanding, judgment, and professional competence. The following constitute violations:
- Submitting AI-generated work without meaningful personal contribution
- Plagiarism of any form, including uncredited use of others' work
- Fabrication or falsification of data, results, or evidence
- Unauthorized collaboration on individually assessed work
- Sharing exam content, questions, or answers with other candidates
Exam Security
Written examinations are remotely proctored with identity verification, screen monitoring, and session recording. Multiple exam versions are drawn from a secured item bank to prevent content leakage. Any violation of exam security protocols results in immediate termination of the exam session and initiation of disciplinary proceedings.
Capstone Originality
Capstone projects are reviewed for evidence of genuine intellectual engagement. Submissions that appear AI-generated without meaningful candidate contribution are flagged for additional review and may be rejected. Consequences for integrity violations range from assignment failure to program dismissal and credential revocation, depending on severity.
Section 15
Records & Audit Trail
The following records are maintained for every candidate who enters the CAIO certification program: portfolio artifacts, capstone projects, completed rubric evaluation sheets, domain scoring records, examination results, oral defense recordings, remediation and appeals records, and final certification determinations.
Retention Schedule
| Record Type | Retention Period |
|---|---|
| Credential and certification records | Indefinite |
| Portfolio and capstone materials | 7 years |
| Rubric evaluation sheets and evaluator notes | 7 years |
| Examination results and item responses | 7 years |
| Oral defense recordings | 5 years |
| Remediation and appeals records | 7 years |
All records are encrypted and access-controlled. Candidates may access their own records at any time. No third-party access to candidate records is permitted without explicit written consent from the candidate.
Section 16
Governance & Revision
- Annual review of all assessment instruments, rubrics, and scoring models
- Comprehensive review every 2 years incorporating external stakeholder input and industry alignment analysis
- Triggered review in response to significant developments in AI technology, regulation, or professional practice
- Version control: candidates are assessed under the version of the assessment system in effect at the time of their enrollment; all prior versions are archived for a minimum of 7 years
Continuous Improvement
The assessment system is continuously improved through analysis of: evaluator feedback and calibration data, candidate performance patterns, pass/fail rate analysis by assessment component, item-level analysis for written examinations, rubric reliability metrics, and stakeholder feedback from employers and industry partners.
Appendix A
Capstone Project Rubric
All 9 dimensions are evaluated on the 4-point proficiency scale. The candidate must achieve Level 3 (Proficient) or above on ALL 9 dimensions. No compensatory scoring.
| Dimension | 1 — Deficient | 2 — Developing | 3 — Proficient | 4 — Distinguished |
|---|---|---|---|---|
| Organizational Context | Superficial; no clear need identified | Basic context; missing key dimensions | Thorough analysis with clear opportunity and constraints identified | Exceptional depth; maturity assessment; multi-stakeholder analysis |
| System Design | Disorganized; no coherent design | Basic design; unclear integration points | Logical, cohesive system with documented design rationale | Elegant, scalable architecture with migration paths |
| Prompt Engineering | Weak prompts; unreliable outputs | Inconsistent results; limited documentation | Controlled, reliable, documented, and tested prompts | Optimized with systematic testing methodology |
| Workflow & Automation | Broken or absent automation | Partial automation; happy-path only | Functional automation with error handling and fallback | Robust with adversarial testing; graceful degradation |
| Implementation Docs | Incomplete; unusable by others | Basic documentation; significant gaps | Clear documentation; enables third-party replication | Professional ops manual with runbooks |
| Ethics & Risk | Missing or token treatment | Minimal; key risks not addressed | Responsible treatment with mitigation and oversight plans | Exemplary with proactive identification and governance framework |
| Impact Evaluation | No metrics defined | Generic metrics; no methodology | Appropriate metrics with methodology and honest analysis | Comprehensive framework; limitation acknowledgment |
| Stakeholder Communication | Absent or incomprehensible | Overly technical or inaccurate | Clear, accurate communication for non-technical audience | Executive-grade with visualization and recommendations |
| Cross-Domain Integration | Components disconnected; no coherence | Loosely connected; siloed thinking | Coherent integration across domains; professional presentation | Seamless systems thinking; operational maturity |
Appendix B
Oral Defense Rubric
All 6 dimensions are evaluated on the 4-point proficiency scale. Both panelists must independently rate the candidate at Level 3 (Proficient) or above on all dimensions.
| Dimension | 1 — Deficient | 2 — Developing | 3 — Proficient | 4 — Distinguished |
|---|---|---|---|---|
| Presentation Clarity | Disorganized; unclear communication | Basic organization; some clarity issues | Well-organized, clear, professional presentation | Compelling; executive-level communication |
| Technical Depth | Fundamental inaccuracies present | Surface-level understanding; gaps in knowledge | Accurate understanding with demonstrated depth | Exceptional depth; nuanced trade-off analysis |
| Response to Questions | Cannot respond meaningfully; defensive | Partial responses; difficulty with challenge | Thoughtful responses; engages constructively with challenge | Exceptional critical thinking; offers novel insight |
| Decision Rationale | Cannot explain design or implementation choices | Basic explanations; limited trade-off awareness | Clear defense of decisions with trade-off analysis | Sophisticated reasoning with awareness of alternatives |
| Ethical Reasoning | Fails to identify obvious ethical risks | Identifies some risks; weak analysis | Systematic framework application; practical recommendations | Exemplary; proactive identification; nuanced judgment |
| Communication Ability | Jargon-heavy; cannot adapt to audience | Some audience adaptation; inconsistent | Adapts communication to audience; clear; professional | Natural audience adaptation; inspires confidence |
Appendix C
Domain Validation Checklist
Evaluators use the following checklist during the Portfolio Readiness Review. Assign M (Mastery), C (Competent), R (Needs Remediation), or I (Insufficient) for each domain based on aggregate evidence.
| # | Domain | Primary Evidence Sources | Score |
|---|---|---|---|
| 1 | AI Literacy | Capstone context analysis; cross-artifact evidence | ____ |
| 2 | Prompt Engineering | ATS-201 Prompt Library; capstone prompts | ____ |
| 3 | Workflow & Automation | ATS-202 Workflow Blueprint; capstone automation | ____ |
| 4 | System Integration | ATS-205 System Architecture; capstone design | ____ |
| 5 | Business Application | ATS-203 Operations Plan; capstone context/impact | ____ |
| 6 | Ethics & Governance | ATS-105 Ethics Analysis; ATS-203 risk assessment; capstone ethics | ____ |
| 7 | Documentation | Quality across all artifacts; ATS-202 handoff; capstone SOPs | ____ |
| 8 | Deployment & Monitoring | Capstone deployment/monitoring plan; ATS-202 maintenance | ____ |
Appendix D
Certification Decision Flowchart
The following decision flowchart summarizes the sequential gates a candidate must pass to receive the CAIO credential.
Step 1: Course-Level Gate
Question: All Tier 1 and Tier 2 courses passed, all rubric criteria Level 3+?
YES → Proceed to Step 2
NO → Must complete or remediate. Cannot proceed.
Step 2: Domain Validation Gate
Question: C or M on all eight competency domains?
YES → Eligible for candidacy. Proceed to Step 3.
NO → Domain remediation required.
Step 3: Capstone Evaluation
Question: Level 3+ on all nine capstone dimensions?
YES → Capstone accepted. Proceed to Step 4.
NO → Returned for revision per remediation policy.
Step 4: Written Examination
Question: 80%+ overall, no domain below 70%?
YES → Examination passed. Proceed to Step 5.
NO → Retake per remediation policy.
Step 5: Oral Defense
Question: Level 3+ on all six dimensions from both panelists?
YES → Defense passed. Proceed to Step 6.
NO → Reattempt per remediation policy.
Step 6: Certification Determination
Question: All five gates passed?
YES: CAIO credential awarded. Digital certificate issued with unique verification number.
NO: Credential withheld. Candidate follows the applicable remediation pathway.
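The flowchart's sequential, short-circuiting character can be sketched as an ordered gate walk. The gate names and function are illustrative; the point is that a candidate's next action is always determined by the first unmet gate.

```python
# Illustrative sketch of the sequential certification gates:
# evaluation stops at the first unmet gate, which defines the next action.

GATES = ["courses", "domains", "capstone", "written_exam", "oral_defense"]

def next_step(status: dict[str, bool]) -> str:
    """Return the remediation target for the first failed gate, or the award."""
    for gate in GATES:
        if not status.get(gate, False):
            return f"remediate: {gate}"
    return "award credential"

assert next_step({"courses": True, "domains": False}) == "remediate: domains"
assert next_step({g: True for g in GATES}) == "award credential"
```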
AI Trade School — Assessment & Validation System — Version 1.0 — Academic Year 2026–2027
End of Document