Assessment & Validation System

AI Trade School

Certified AI Operator (CAIO)

Version 1.0
Academic Year 2026–2027
Effective August 1, 2026
Classification: Institutional Assessment Standards
This document defines the assessment methods, rubric standards, scoring models, pass/fail criteria, remediation policies, and evaluator standards for the CAIO certification program.

Section 1

Assessment Philosophy

The AI Trade School CAIO certification program employs a competency-based assessment model. Credentials are awarded on the basis of demonstrated mastery — not seat hours, content consumption, or time in program. Every assessment instrument answers a single question: can this individual perform this competency to a professional standard under authentic conditions?

Assessment protects three things simultaneously: the value of the credential, employer confidence in certified operators, and the institutional integrity of AI Trade School.

Assessment Principles

Applied
Assessment requires application to authentic tasks. Rote memorization of definitions, terminology, or procedures is insufficient to demonstrate competency.
Observable
Competency is demonstrated through work products, performances, and demonstrations. Self-report is not accepted as evidence of competency.
Measurable
All assessments use published criteria, rubrics, and scoring models with defined proficiency levels. No unconstrained subjectivity in evaluation.
Repeatable
Assessment instruments are standardized so that different evaluators, evaluating the same work, reach the same conclusion. Inter-rater reliability is a design priority.
Aligned
Every assessment is aligned to specific competency domains and credential requirements. No assessment exists without a direct connection to certification standards.
Transparent
All rubrics, criteria, and standards are published and available to candidates in advance. No hidden requirements or undisclosed evaluation criteria.
Ethical
Assessment is free from bias, conflict of interest, and arbitrary decision-making. Candidates have the right to fair evaluation and appeals processes.

What Assessment Is Not

The following do not constitute evidence of competency and are not used in certification decisions:

  • Attendance or login records
  • Content consumption metrics (videos watched, pages viewed)
  • Peer ratings or endorsements
  • Self-assessment or self-reported skill levels
  • Forum participation (unless rubric-evaluated)
  • Time spent on platform or in coursework

Certification decisions are made exclusively on the basis of evaluated performance against published standards.

Section 2

Three-Level Assessment Architecture

The CAIO assessment system operates across three sequential levels. Candidates cannot advance to a higher level without completing the prior level in full.

Level | Purpose | Scope | Instruments
Course-Level | Skill acquisition & knowledge verification | Individual courses (Tier 1 & Tier 2) | Quizzes, demonstrations, rubric-evaluated artifacts
Domain-Level | Competency validation across domains | 8 CAIO competency domains | Portfolio review, cross-course evaluation, domain scoring
Certification-Level | Certification readiness & credential award | Complete CAIO candidacy | Capstone project, written exam, oral defense

Assessment Flow

  1. Complete all Tier 1 courses; pass all knowledge checks and activities.
  2. Complete all Tier 2 courses; achieve Proficient or above on all rubric-evaluated artifacts.
  3. Portfolio Readiness Review and Domain-Level Validation across all 8 competency domains.
  4. Enroll as CAIO candidate upon successful domain validation.
  5. Submit capstone project for evaluation.
  6. Sit written examination under proctored conditions.
  7. Complete oral defense before evaluation panel.
  8. Institution reviews all assessment results and makes credential determination.

Section 3

Tier 1 Assessment Standards

Tier 1 courses serve a gatekeeping function. They verify foundational knowledge and basic competency required to enter applied coursework. Tier 1 performance does not contribute to CAIO certification scoring.

Course | Assessment Type | Passing Standard
ATS-101: AI Foundations | Multiple-choice knowledge assessment; short-answer reflection | 75%; reflection complete
ATS-102: ChatGPT Fundamentals | Prompt portfolio (3 categories); limitations quiz | 75% on quiz; portfolio complete
ATS-103: AI Tools Overview | Tool Evaluation Matrix (5 tools); knowledge assessment | 75%; matrix complete
ATS-104: AI Productivity | Productivity Audit with metrics; knowledge assessment | 75%; audit complete
ATS-105: AI Careers & Ethics | Ethical Case Analysis (2 scenarios); Career Plan; assessment | 75%; both artifacts complete

Grading Model

All Tier 1 courses are graded Pass / Not Yet Passed. There is no partial credit. Knowledge assessments permit up to 3 retakes with a 48-hour waiting period between attempts. Activity-based assessments permit 1 revision. Upon exhaustion of all attempts, candidates must wait 60 days and re-enroll in the course.

Section 4

Tier 2 Assessment Standards

Tier 2 courses assess applied competence through portfolio artifacts evaluated on a standardized 4-level rubric. These artifacts form the foundation of the candidate's professional portfolio and contribute directly to domain-level validation.

Unified Rubric Scale

Level | Label | Description
4 | Distinguished | Production-ready, exemplary work. Exceeds professional standards. Could serve as a model for others.
3 | Proficient | Meets professional standards. Competent, reliable, suitable for professional use. Minimum for course completion.
2 | Developing | Approaches but does not meet professional standards. Gaps present affecting reliability. Requires revision.
1 | Deficient | Does not approach professional standards. Incomplete or fundamentally incorrect. Substantial work required.

Course Pass Rule: Every criterion on every rubric must achieve Level 3 (Proficient) or above. There is no averaging. A score of 4 on one criterion does not compensate for a score of 2 on another. Artifacts scoring below Level 3 on any criterion are returned for revision.

Candidates are permitted 2 revisions per course. Revisions must be substantive — cosmetic changes without addressing evaluator feedback do not qualify. Failure to achieve Proficient after 2 revisions requires a 30-day waiting period followed by retaking the full course.
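The non-compensatory pass rule above can be expressed as a minimal sketch. The function and criterion names are illustrative only, not part of any institutional system:

```python
# Sketch of the Tier 2 pass rule: every rubric criterion must reach Level 3
# (Proficient); there is no averaging, so one low criterion fails the artifact.

PROFICIENT = 3

def artifact_passes(criterion_scores):
    """criterion_scores: mapping of criterion name -> level (1-4)."""
    return all(level >= PROFICIENT for level in criterion_scores.values())

def criteria_needing_revision(criterion_scores):
    """Criteria scored below Proficient; the artifact is returned for these."""
    return [name for name, level in criterion_scores.items() if level < PROFICIENT]
```

Note that a Level 4 on one criterion has no effect here: the check is a conjunction over all criteria, matching the "no averaging" rule.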

Section 5

Course-Level Rubrics

The following rubrics define the evaluation criteria for each Tier 2 course. Each artifact is evaluated across 5 criteria on the 4-level proficiency scale. All criteria must achieve Level 3 (Proficient) or above for course completion.

ATS-201: Prompt Engineering for Business Operations

Artifact: Organizational Prompt Library (20+ reusable prompts)

Criterion | 1 — Deficient | 2 — Developing | 3 — Proficient | 4 — Distinguished
Prompt Structure & Design | Vague, unstructured, missing elements | Basic structure, inconsistent | Clear structure with role/task/format/constraints; consistent methodology | Highly optimized; chaining, conditional logic, multi-step
Output Reliability | Unpredictable, unreliable | Some consistency, frequent variation | Consistent, controlled; tone/length/format reliable | Production-grade with tested edge cases; QA documented
Iteration & Version Control | No refinement evidence | Minimal iteration, no tracking | Logical refinement documented; version history maintained | Strategic optimization with A/B testing; comprehensive versioning
Documentation Quality | Incomplete or missing | Basic descriptions, unclear instructions | Clear instructions, expected outputs, limitations, audience | Professional-grade; team deployment ready; troubleshooting guides
Organizational Applicability | Not professionally relevant | Limited applicability, generic | Business-ready, aligned to specific functions | Enterprise-grade; organized by department; governance guidelines

ATS-202: AI Automation & Workflow Design

Artifact: AI Workflow Blueprint

Criterion | 1 — Deficient | 2 — Developing | 3 — Proficient | 4 — Distinguished
Process Mapping | No clear map; disorganized | Basic flow; missing decision points | Complete with inputs, outputs, decisions, data flows, bottlenecks | Comprehensive with optimization analysis; current vs. improved
Automation Logic | Broken or absent | Partial; triggers/actions incomplete | Functional trigger-action-condition with branching and sequencing | Robust; parallel processing; dynamic conditions
Error Handling | None | Minimal; happy-path only | Appropriate fallback logic and human escalation | Comprehensive error taxonomy; graceful degradation
Testing & Validation | No testing evidence | Limited; ideal conditions only | Tested normal and edge-case; documented | Systematic plan including adversarial; regression testing
Documentation & Handoff | Incomplete or unusable | Basic; gaps in maintenance | Clear; enables third-party maintenance | Professional ops manual with runbooks and SLA definitions

ATS-203: AI for Business Operations

Artifact: AI Operations Plan

Criterion | 1 — Deficient | 2 — Developing | 3 — Proficient | 4 — Distinguished
Readiness Assessment | None or superficial | Basic; missing key dimensions | Thorough: infrastructure, workforce, data, culture | Comprehensive maturity scoring with gap analysis
Solution Alignment | Doesn't address need | Some connection; vague | Clear alignment with specific, measurable objectives | Strategic alignment with organizational roadmap
Implementation Planning | None or unrealistic | Basic timeline only | Phased: timeline, resources, budget, risks | Detailed with milestones, gates, contingencies, change management
ROI & Impact | No metrics | Generic; no baseline | Appropriate metrics with baseline and methodology | Comprehensive framework; leading/lagging indicators
Stakeholder Communication | None | Excessive jargon or inaccurate | Clear, accurate for non-technical decision-makers | Executive-grade with visualization and recommendations

ATS-204: AI Content & Communication Systems

Artifact: AI Content System Blueprint

Criterion | 1 — Deficient | 2 — Developing | 3 — Proficient | 4 — Distinguished
Content Strategy | No coherent strategy | Basic; no channel/audience specifics | Multi-channel, audience-aligned, brand-voiced | Sophisticated with calendar, KPIs, optimization cycle
Prompt Templates | Vague or unusable | Basic; inconsistent output | Reliable brand-aligned templates across formats | Advanced with tone calibration and audience variants
QA System | None | Basic; no structured criteria | Documented workflow with criteria, gates, revision | Multi-layer QA with automated checks and quality metrics
Editorial Workflow | None | Informal; unclear roles | Structured with roles, handoffs, timelines, escalation | Professional editorial ops with version control and audit
Governance | None | Minimal; key policies missing | Comprehensive: disclosure, attribution, data handling, acceptable use | Enterprise framework with compliance monitoring and incident response

ATS-205: AI Tool Integration & System Design

Artifact: AI System Architecture Document

Criterion | 1 — Deficient | 2 — Developing | 3 — Proficient | 4 — Distinguished
Architecture Design | No coherent architecture | Basic; unclear integration | Logical, cohesive with documented rationale | Elegant, scalable with migration paths
Tool Selection | No rationale | Basic; missing criteria | Documented evaluation: interoperability, cost, security | Strategic with TCO modeling and vendor risk
Data Flow | Unclear or broken | Basic; missing dependencies | Complete with dependencies, transformations, integrity checks | Comprehensive with lineage, validation, monitoring
Security & Risk | None | Minimal attention | Documented: access control, data handling, vulnerability | Comprehensive with threat modeling and incident procedures
Maintenance & Audit | None | Basic notes only | Documented schedule, audit procedures, update protocols | Professional ops with dashboards, SLA tracking, capacity planning

Section 6

Domain Validation

Domain validation bridges course-level and certification-level assessment. It evaluates whether a candidate has achieved competency across each of the 8 CAIO competency domains using evidence drawn from multiple courses and artifacts.

Domain Scoring Model

Score | Label | Description
M | Mastery | Independent, professional-level competency. Could operate autonomously. Consistently Distinguished performance.
C | Competent | Safe, reliable competency. Sufficient knowledge, skill, and judgment for professional practice. Consistently Proficient or above. Minimum for certification.
R | Needs Remediation | Gaps present. Insufficient evidence for unsupervised practice. Targeted remediation required before proceeding.
I | Insufficient | Fundamental gaps across multiple competency areas. Must retake relevant course(s).

Passing: Competent (C) or Mastery (M) on ALL eight domains. A score of R or I on any domain blocks advancement to certification-level assessment.
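A minimal sketch of this gate, assuming domain scores are recorded as the single-letter codes above (function and variable names are illustrative):

```python
# Sketch of the domain validation gate: candidacy requires Competent (C) or
# Mastery (M) on all eight domains; any R or I blocks advancement.

PASSING_DOMAIN_SCORES = {"M", "C"}

def domains_validated(domain_scores):
    """domain_scores: mapping of the 8 domain names to 'M', 'C', 'R', or 'I'."""
    return (
        len(domain_scores) == 8  # all eight domains must be scored
        and all(score in PASSING_DOMAIN_SCORES for score in domain_scores.values())
    )
```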

Portfolio Readiness Review

  1. Verify all required artifacts are present and scored Proficient or above on all rubric criteria.
  2. Evaluate aggregate evidence for each of the 8 competency domains across all submitted artifacts.
  3. Assign M, C, R, or I to each domain based on the totality of evidence.
  4. Document rationale for each domain score with specific evidence citations.
  5. Communicate results to the candidate in writing with actionable feedback.

Remediation

Candidates scoring R (Needs Remediation) on any domain receive specific feedback identifying competency gaps and targeted remediation activities. Upon completion, the domain is reevaluated. Candidates scoring I (Insufficient) must retake the relevant course(s) in full.

Section 7

Capstone Assessment

The capstone project is the centerpiece of the CAIO certification assessment. It requires candidates to integrate all 8 competency domains into a single comprehensive project demonstrating professional-level AI operations competency. The capstone carries a weight of 40% in the final certification determination.

Required Deliverables

  1. Organizational Context Analysis
  2. AI System Design & Architecture
  3. Implementation Documentation
  4. Standard Operating Procedures (minimum 2)
  5. Ethical & Risk Assessment
  6. Impact Evaluation
  7. Stakeholder Communication Package

A missing deliverable results in automatic return of the capstone without evaluation. All 7 deliverables must be present before the capstone enters the evaluation queue.

Capstone projects are evaluated by a minimum of 2 independent evaluators using the rubric defined in Appendix A. All 9 rubric dimensions must achieve Level 3 (Proficient) or above. One revision is permitted. Failure to achieve Proficient after revision requires a 90-day waiting period followed by submission of a new capstone project.

Section 8

Written Examination

Question Type | Count | Purpose
Multiple-Choice | 60 | Breadth of knowledge; one correct answer, three distractors
Short-Answer | 20 | Application to specific scenarios; scored 0–2
Scenario-Based | 20 | Complex reasoning, multi-domain integration, professional judgment; scored 0–4

Specifications: 100 questions total, 3-hour time limit, 80% passing threshold, minimum 70% per domain, remotely proctored, multiple exam versions drawn from a secured item bank. The written examination carries a weight of 30% in the final certification determination.
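The pass rule can be sketched as follows. Per-domain earned and possible points are assumed inputs; the specification fixes only the 80% overall and 70% per-domain thresholds:

```python
# Sketch of the written-exam pass rule: 80% or above overall AND at least 70%
# in every domain. Non-compensatory: a high overall score cannot offset a
# weak domain.

def exam_passes(earned_by_domain, possible_by_domain):
    """Both arguments map domain name -> points (earned vs. possible)."""
    overall = sum(earned_by_domain.values()) / sum(possible_by_domain.values())
    if overall < 0.80:
        return False
    # Per-domain floor: no domain below 70%.
    return all(
        earned_by_domain[d] / possible_by_domain[d] >= 0.70
        for d in possible_by_domain
    )
```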

Domain Weight Distribution

Domain | Weight | Approx. Questions
1. AI Literacy | 10–12% | 10–12
2. Prompt Engineering | 12–15% | 12–15
3. Workflow & Automation | 12–15% | 12–15
4. System Integration | 10–12% | 10–12
5. Business Application | 12–15% | 12–15
6. Ethics & Governance | 15–18% | 15–18
7. Documentation & SOPs | 8–10% | 8–10
8. Deployment & Monitoring | 10–12% | 10–12

Domain 6 (Ethics & Governance) carries intentionally elevated weight. Ethical reasoning is a core professional competency for certified AI operators and is weighted accordingly.

One retake is permitted with a 30-day waiting period. The retake uses a different exam version. A second failure requires a 6-month waiting period followed by re-enrollment in the certification program.

Section 9

Oral Defense

Component | Duration | Focus
Capstone Presentation | 20 minutes | Clarity, thoroughness, professionalism, logical structure
Panel Questions | 20 minutes | Depth of understanding, rationale for decisions, alternatives considered, critical thinking under challenge
Ethical Scenario | 10 minutes | Novel scenario requiring framework application and actionable recommendation

The oral defense panel consists of a minimum of 2 qualified reviewers. Both panelists must independently rate the candidate as passing. In the event of a split decision, a third panelist is brought in to make the determination. The defense is conducted via live video, recorded with candidate consent. The oral defense carries a weight of 30% in the final certification determination.
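The panel decision rule can be sketched as follows (illustrative only; the third panelist's determination is modeled as a simple tiebreaker):

```python
# Sketch of the oral defense decision rule: both panelists must independently
# rate the candidate as passing; on a split decision a third panelist is
# brought in and their rating decides. Function name is illustrative.

def defense_outcome(panelist_votes, tiebreaker=None):
    """panelist_votes: [bool, bool]; tiebreaker: third panelist's rating."""
    first, second = panelist_votes
    if first and second:
        return True          # unanimous pass
    if not first and not second:
        return False         # unanimous fail
    # Split decision: escalate to a third panelist.
    if tiebreaker is None:
        raise ValueError("split decision requires a third panelist rating")
    return tiebreaker
```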

Fail Conditions

  • Candidate cannot explain their own system design or implementation decisions
  • Candidate misrepresents AI capabilities or limitations
  • Candidate demonstrates ethical blind spots on direct questioning
  • Candidate cannot engage constructively with critical challenge or feedback
  • Any dimension on the defense rubric scores below Level 3

One reattempt is permitted with a 30-day waiting period. The candidate must submit a written response addressing evaluator feedback before the reattempt. A second failure requires a 6-month waiting period and re-enrollment in the certification program.

Section 10

Unified Proficiency Scale

The following 4-point proficiency scale is applied identically across all rubric-evaluated assessments: Tier 2 course rubrics, the Capstone Rubric (Appendix A), and the Oral Defense Rubric (Appendix B). All evaluators are calibrated to this scale.

Level | Label | Description
4 | Distinguished | Exceeds professional standards. Exceptional depth, originality, and quality. Could serve as a model for other candidates.
3 | Proficient | Meets professional standards. Competent, reliable work suitable for professional use. Minimum for course completion and CAIO certification.
2 | Developing | Approaches but does not meet professional standards. Gaps present that affect reliability. Requires revision.
1 | Deficient | Does not approach professional standards. Fundamental gaps in understanding or execution. Substantial work required.

Section 11

Pass / Fail Logic

Certification Awarded When ALL Conditions Are Met

  1. All 5 Tier 1 courses passed
  2. All 5 Tier 2 courses passed with all rubric criteria at Level 3 or above
  3. All 8 competency domains scored at Competent (C) or Mastery (M)
  4. Capstone project scored Level 3 or above on all 9 dimensions
  5. Written examination score of 80% or above overall, with no domain below 70%
  6. Oral defense scored Level 3 or above on all 6 dimensions by both panelists

Certification Withheld If Any of the Following Apply

  • Any Tier 1 or Tier 2 course not passed
  • Any Tier 2 rubric criterion below Level 3
  • Any competency domain scored R or I (unremediated)
  • Any capstone dimension below Level 3
  • Written exam score below 80% overall or any domain below 70%
  • Any oral defense dimension below Level 3

No compensatory scoring at any level. A Level 4 on System Design does not compensate for a Level 2 on Ethics. A 95% exam score does not compensate for a failed oral defense. Every competency matters independently.

Weight Summary

Component | Weight | Passing Standard
Portfolio / Domain Review | Prerequisite Gate | C or M on all 8 domains
Capstone Project | 40% | Level 3+ on all 9 dimensions
Written Examination | 30% | 80% overall; 70% per domain
Oral Defense | 30% | Level 3+ on all 6 dimensions
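A sketch of the full determination, assuming each gate result has already been reduced to the summary fields shown. The field names are illustrative, not a records schema; the weights rank-order contribution but never allow one component to offset a failed gate:

```python
# Sketch of the non-compensatory certification decision: the credential is
# awarded only when every gate passes independently.

from dataclasses import dataclass

@dataclass
class CandidateRecord:
    tier1_passed: bool        # all 5 Tier 1 courses passed
    tier2_criteria_min: int   # lowest Tier 2 rubric criterion score (1-4)
    domain_scores: dict       # 8 domains -> "M" / "C" / "R" / "I"
    capstone_min: int         # lowest of the 9 capstone dimensions
    exam_overall: float       # 0.0-1.0
    exam_domain_min: float    # lowest per-domain exam fraction
    defense_min: int          # lowest of the 6 defense dimensions, either panelist

def credential_awarded(r):
    return (r.tier1_passed
            and r.tier2_criteria_min >= 3
            and all(score in {"M", "C"} for score in r.domain_scores.values())
            and r.capstone_min >= 3
            and r.exam_overall >= 0.80
            and r.exam_domain_min >= 0.70
            and r.defense_min >= 3)
```

Because every clause is joined by `and`, a single sub-threshold value withholds the credential regardless of strength elsewhere, mirroring the "every competency matters independently" rule.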

Section 12

Remediation & Reassessment

Assessment | Remediation | Wait Period | After Exhaustion
Tier 1 Knowledge Assessments | 3 retakes | 48 hours | 60-day wait; re-enroll in course
Tier 1 Activities | 1 revision | None | 60-day wait; re-enroll in course
Tier 2 Artifacts | 2 revisions | None | 30-day wait; retake course
Domain Validation | Targeted remediation | As prescribed | Retake relevant course(s)
Capstone Project | 1 revision | None | 90-day wait; submit new capstone
Written Examination | 1 retake | 30 days | 6-month wait; re-enroll
Oral Defense | 1 reattempt | 30 days | 6-month wait; re-enroll

All revisions must be substantive. Cosmetic changes without addressing evaluator feedback do not qualify as a substantive revision. Candidates who have exhausted standard remediation opportunities may petition for additional attempts; petitions are reviewed on a case-by-case basis and are not guaranteed.
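The remediation schedule can be expressed as a lookup table that an advising or LMS tool might consult. Keys and field names are illustrative; the values mirror the table above:

```python
# Remediation policy as a lookup table. Domain validation is omitted because
# it follows a prescribed remediation plan rather than a fixed attempt count.

REMEDIATION_POLICY = {
    "tier1_knowledge": {"allowed": 3, "wait": "48 hours", "exhausted": "60-day wait; re-enroll in course"},
    "tier1_activity":  {"allowed": 1, "wait": None,       "exhausted": "60-day wait; re-enroll in course"},
    "tier2_artifact":  {"allowed": 2, "wait": None,       "exhausted": "30-day wait; retake course"},
    "capstone":        {"allowed": 1, "wait": None,       "exhausted": "90-day wait; submit new capstone"},
    "written_exam":    {"allowed": 1, "wait": "30 days",  "exhausted": "6-month wait; re-enroll"},
    "oral_defense":    {"allowed": 1, "wait": "30 days",  "exhausted": "6-month wait; re-enroll"},
}
```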

Section 13

Evaluator Standards

Qualifications

  • Minimum 3 years of relevant professional experience
  • Demonstrated domain competence in assigned evaluation areas
  • Completed institutional Evaluator Training program
  • Familiar with all rubrics, scoring models, and assessment frameworks
  • No conflicts of interest with candidates under evaluation

Responsibilities

Evaluators are responsible for:

  • Applying rubrics consistently and without bias
  • Documenting all scoring decisions with evidence-based rationale
  • Providing actionable feedback that guides candidate improvement
  • Participating in regular calibration sessions
  • Reporting any concerns regarding candidate integrity or assessment instrument validity
  • Maintaining strict confidentiality of all candidate materials and results

Calibration

All evaluators undergo an onboarding training program that includes scoring benchmark artifacts at each proficiency level. Quarterly calibration sessions ensure ongoing alignment. Annual inter-rater reliability analysis targets a Cohen's kappa of 0.80 or above. Evaluators whose scores consistently deviate from calibrated standards receive additional training or are removed from the evaluation panel.
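For reference, two-rater Cohen's kappa can be computed as in this minimal sketch. It is purely illustrative: the institutional analysis might instead use a weighted kappa, which better suits the ordinal 1–4 scale:

```python
# Minimal unweighted Cohen's kappa for two raters scoring the same artifacts,
# the statistic behind the annual inter-rater reliability target (>= 0.80).

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    # Observed proportion of exact agreement.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal distribution.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)
```

Kappa corrects raw percent agreement for agreement expected by chance: 1.0 means perfect agreement, 0 means chance-level, and negative values mean systematic disagreement.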

Section 14

Assessment Integrity

Academic Integrity

All submitted work must be the candidate's own. AI tools may be used as instructed within the curriculum, but submissions must demonstrate the candidate's personal understanding, judgment, and professional competence. The following constitute violations:

  • Submitting AI-generated work without meaningful personal contribution
  • Plagiarism of any form, including uncredited use of others' work
  • Fabrication or falsification of data, results, or evidence
  • Unauthorized collaboration on individually assessed work
  • Sharing exam content, questions, or answers with other candidates

Exam Security

Written examinations are remotely proctored with identity verification, screen monitoring, and session recording. Multiple exam versions are drawn from a secured item bank to prevent content leakage. Any violation of exam security protocols results in immediate termination of the exam session and initiation of disciplinary proceedings.

Capstone Originality

Capstone projects are reviewed for evidence of genuine intellectual engagement. Submissions that appear AI-generated without meaningful candidate contribution are flagged for additional review and may be rejected. Consequences for integrity violations range from assignment failure to program dismissal and credential revocation, depending on severity.

Section 15

Records & Audit Trail

The following records are maintained for every candidate who enters the CAIO certification program: portfolio artifacts, capstone projects, completed rubric evaluation sheets, domain scoring records, examination results, oral defense recordings, remediation and appeals records, and final certification determinations.

Retention Schedule

Record Type | Retention Period
Credential and certification records | Indefinite
Portfolio and capstone materials | 7 years
Rubric evaluation sheets and evaluator notes | 7 years
Examination results and item responses | 7 years
Oral defense recordings | 5 years
Remediation and appeals records | 7 years

All records are encrypted and access-controlled. Candidates may access their own records at any time. No third-party access to candidate records is permitted without explicit written consent from the candidate.

Section 16

Governance & Revision

  • Annual review of all assessment instruments, rubrics, and scoring models
  • Comprehensive review every 2 years incorporating external stakeholder input and industry alignment analysis
  • Triggered review in response to significant developments in AI technology, regulation, or professional practice
  • Version control: candidates are assessed under the version of the assessment system in effect at the time of their enrollment; all prior versions are archived for a minimum of 7 years

Continuous Improvement

The assessment system is continuously improved through analysis of:

  • Evaluator feedback and calibration data
  • Candidate performance patterns
  • Pass/fail rate analysis by assessment component
  • Item-level analysis for written examinations
  • Rubric reliability metrics
  • Stakeholder feedback from employers and industry partners

Appendix A

Capstone Project Rubric

All 9 dimensions are evaluated on the 4-point proficiency scale. The candidate must achieve Level 3 (Proficient) or above on ALL 9 dimensions. No compensatory scoring.

Dimension | 1 — Deficient | 2 — Developing | 3 — Proficient | 4 — Distinguished
Organizational Context | Superficial; no clear need identified | Basic context; missing key dimensions | Thorough analysis with clear opportunity and constraints identified | Exceptional depth; maturity assessment; multi-stakeholder analysis
System Design | Disorganized; no coherent design | Basic design; unclear integration points | Logical, cohesive system with documented design rationale | Elegant, scalable architecture with migration paths
Prompt Engineering | Weak prompts; unreliable outputs | Inconsistent results; limited documentation | Controlled, reliable, documented, and tested prompts | Optimized with systematic testing methodology
Workflow & Automation | Broken or absent automation | Partial automation; happy-path only | Functional automation with error handling and fallback | Robust with adversarial testing; graceful degradation
Implementation Docs | Incomplete; unusable by others | Basic documentation; significant gaps | Clear documentation; enables third-party replication | Professional ops manual with runbooks
Ethics & Risk | Missing or token treatment | Minimal; key risks not addressed | Responsible treatment with mitigation and oversight plans | Exemplary with proactive identification and governance framework
Impact Evaluation | No metrics defined | Generic metrics; no methodology | Appropriate metrics with methodology and honest analysis | Comprehensive framework; limitation acknowledgment
Stakeholder Communication | Absent or incomprehensible | Overly technical or inaccurate | Clear, accurate communication for non-technical audience | Executive-grade with visualization and recommendations
Cross-Domain Integration | Components disconnected; no coherence | Loosely connected; siloed thinking | Coherent integration across domains; professional presentation | Seamless systems thinking; operational maturity

Appendix B

Oral Defense Rubric

All 6 dimensions are evaluated on the 4-point proficiency scale. Both panelists must independently rate the candidate at Level 3 (Proficient) or above on all dimensions.

Dimension | 1 — Deficient | 2 — Developing | 3 — Proficient | 4 — Distinguished
Presentation Clarity | Disorganized; unclear communication | Basic organization; some clarity issues | Well-organized, clear, professional presentation | Compelling; executive-level communication
Technical Depth | Fundamental inaccuracies present | Surface-level understanding; gaps in knowledge | Accurate understanding with demonstrated depth | Exceptional depth; nuanced trade-off analysis
Response to Questions | Cannot respond meaningfully; defensive | Partial responses; difficulty with challenge | Thoughtful responses; engages constructively with challenge | Exceptional critical thinking; offers novel insight
Decision Rationale | Cannot explain design or implementation choices | Basic explanations; limited trade-off awareness | Clear defense of decisions with trade-off analysis | Sophisticated reasoning with awareness of alternatives
Ethical Reasoning | Fails to identify obvious ethical risks | Identifies some risks; weak analysis | Systematic framework application; practical recommendations | Exemplary; proactive identification; nuanced judgment
Communication Ability | Jargon-heavy; cannot adapt to audience | Some audience adaptation; inconsistent | Adapts communication to audience; clear; professional | Natural audience adaptation; inspires confidence

Appendix C

Domain Validation Checklist

Evaluators use the following checklist during the Portfolio Readiness Review. Assign M (Mastery), C (Competent), R (Needs Remediation), or I (Insufficient) for each domain based on aggregate evidence.

# | Domain | Primary Evidence Sources | Score
1 | AI Literacy | Capstone context analysis; cross-artifact evidence | ____
2 | Prompt Engineering | ATS-201 Prompt Library; capstone prompts | ____
3 | Workflow & Automation | ATS-202 Workflow Blueprint; capstone automation | ____
4 | System Integration | ATS-205 System Architecture; capstone design | ____
5 | Business Application | ATS-203 Operations Plan; capstone context/impact | ____
6 | Ethics & Governance | ATS-105 Ethics Analysis; ATS-203 risk assessment; capstone ethics | ____
7 | Documentation | Quality across all artifacts; ATS-202 handoff; capstone SOPs | ____
8 | Deployment & Monitoring | Capstone deployment/monitoring plan; ATS-202 maintenance | ____

Appendix D

Certification Decision Flowchart

The following decision flowchart summarizes the sequential gates a candidate must pass to receive the CAIO credential.

Step 1: Course-Level Gate

Question: All Tier 1 and Tier 2 courses passed, all rubric criteria Level 3+?

YES → Proceed to Step 2

NO → Must complete or remediate. Cannot proceed.

Step 2: Domain Validation Gate

Question: C or M on all eight competency domains?

YES → Eligible for candidacy. Proceed to Step 3.

NO → Domain remediation required.

Step 3: Capstone Evaluation

Question: Level 3+ on all nine capstone dimensions?

YES → Capstone accepted. Proceed to Step 4.

NO → Returned for revision per remediation policy.

Step 4: Written Examination

Question: 80%+ overall, no domain below 70%?

YES → Examination passed. Proceed to Step 5.

NO → Retake per remediation policy.

Step 5: Oral Defense

Question: Level 3+ on all six dimensions from both panelists?

YES → Defense passed. Proceed to Step 6.

NO → Reattempt per remediation policy.

Step 6: Certification Determination

Question: All five gates passed?

YES: CAIO credential awarded. Digital certificate issued with unique verification number.

NO: Credential withheld. Candidate follows the applicable remediation pathway.

AI Trade School — Assessment & Validation System — Version 1.0 — Academic Year 2026–2027

End of Document