Created: 8 May 2026
Contents
Aim and requirements
1.1. Individual Work (mandatory)
Deliverable 1 — Homework 05&06 - Academic Paper (DOCX)
Article Name
Paper Table of Contents (use these headings)
2.1. Introduction (what to include)
2.2. Literature Review (scope and expectations)
2.3. Methodology (what you must describe)
2.3.1. Review method
2.3.2. Baseline taxonomy analysis method
2.3.3. Taxonomy development method
2.3.4. Mandatory articulation perspectives
2.3.5. Additional required perspectives
2.3.6. Comparison tables (minimum 4)
2.4. Analysis of Hernan Huwyler’s AI System Threat Vector Taxonomy (design you must include)
2.4.1. Conceptual overview
2.4.2. Strengths
2.4.3. Limitations and gaps
2.4.4. Transitional justification
2.5. Proposed Novel AI System Threat Vector Taxonomy (what you must present)
2.5.1. Taxonomy design requirements
2.5.2. Mandatory design expectation
2.5.3. Visual representation
2.5.4. Minimum expectations
2.6. Comparative Analysis, Results and Evaluation (what you must report)
2.6.1. Minimum requirements
2.6.2. Findings
2.7. Discussion
2.8. Conclusion and Future Work
2.9. Appendices
2.10. References
Deliverable 2 — Homework 07&08 - Discussion Paper (DOCX)
Discussion Paper Title
Front Matter
Title Page
Abstract
Table of Contents
3.1. Introduction
3.1.1. Problem Statement or Context
3.1.2. Significance of the Study
3.1.3. Thesis Statement
3.2. Theoretical Framework or Methodology
3.2.1. Analytical Approach
3.2.2. Data Sources / Materials
3.3. Body: Detailed Analysis
3.3.1. Major Component A — Application of the Taxonomy to OpenClaw
3.3.2. Major Component B — Application of the Taxonomy to NemoClaw
3.3.3. Synthesis of Findings
3.4. Discussion
3.4.1. Implications of the Analysis
3.4.2. Limitations
3.5. Conclusion
3.5.1. Restatement of Thesis and Key Findings
3.5.2. Recommendations or Future Research Directions
3.5.3. Back Matter
Additional Instructor Notes
Submission rules, marking, integrity, and writing standards
4.1. Submission format and file requirements
4.2. Submission package and naming
4.3. Homework marking rubric (100 points)
4.4. Academic integrity and authorship responsibility
4.5. Use of AI tools and mandatory declaration
4.6. Writing, references, and overall document quality
Aim and requirements
This assignment is designed to develop students’ ability to analyse AI systems through a structured governance, risk, assurance, and threat taxonomy lens. The overall assignment is divided into two linked deliverables. In Deliverable 1, students will develop an academically grounded novel AI System Threat Vector Taxonomy by critically examining existing literature and extending Hernan Huwyler’s baseline taxonomy through multiple perspectives. In Deliverable 2, students will apply that taxonomy analytically to OpenClaw and NemoClaw in the form of a discussion paper.
The assignment therefore requires both taxonomy construction and taxonomy application. It is intended to strengthen conceptual understanding of AI systems as socio-technical systems, while also improving analytical writing, comparison, synthesis, academic structuring, and evidence-based argumentation.
Students are expected to demonstrate that AI system threats cannot be understood only through isolated technical vulnerabilities. Instead, the assignment requires students to consider AI threats through several interrelated perspectives, including CAIMS, CAIMOM, AI lifecycle stages, Bowtie logic, third-party dependencies, and further governance, operational, human, data, and infrastructure perspectives where appropriate.
A strong submission will do all of the following:
show a clear understanding of AI systems as governance-relevant socio-technical systems;
distinguish between a baseline taxonomy and a proposed improved taxonomy;
explain and justify classification choices clearly;
compare frameworks and perspectives analytically rather than descriptively;
and apply the developed taxonomy coherently to real AI system cases.
1.1. Individual Work (mandatory)
This is a STRICTLY INDIVIDUAL assignment.
You must prepare and submit all deliverables individually.
No group collaboration, shared drafting, or peer co-authoring is permitted.
General classroom discussion of public course topics may occur during lessons; however, the actual homework content, structure, writing, tables, analysis, and conclusions must be your own individual work.
Deliverable 1 — Homework 05&06 - Academic Paper (DOCX)
You will submit one DOCX document for this homework, and it must be written in the IEEE Access template.
Article Name
A Novel AI System Threat Vector Taxonomy: A Research-Based Extension of Hernan Huwyler’s AI System Threat Vector Taxonomy
Paper Table of Contents (use these headings)
You must structure your paper using the following Table of Contents, and you may only adjust it if you provide a short justification in the Methodology section.
Introduction
Literature Review
Methodology
Analysis of Hernan Huwyler’s AI System Threat Vector Taxonomy
Proposed Novel AI System Threat Vector Taxonomy
Comparative Analysis, Results and Evaluation
Discussion
Conclusion and Future Work
Appendices
References
2.1. Introduction (what to include)
Explain, at a high level, what an AI system is and why AI-related risks must be analysed from a system-level perspective rather than only from the perspective of a standalone model. You should explain why AI systems create complex threat surfaces across data, models, infrastructure, interfaces, human actors, governance mechanisms, and operational workflows. In addition, you should explain why structured threat taxonomies matter for AI governance, AI assurance, auditing, risk management, and system security.
You should also state the goals of your study, and you should briefly preview:
(i) the conceptual and analytical basis of your taxonomy development process, and
(ii) the original threat vector taxonomy you propose as the main contribution of the paper.
You must also state explicitly that your taxonomy development will be articulated through multiple analytical perspectives, and these must include CAIMS, CAIMOM, AI lifecycle stages, and Bowtie-based risk logic as mandatory lenses. The textbook positions CAIMS as a unifying socio-technical governance architecture and CAIMOM as an operational model organised into staged AI lifecycle flows, which makes both directly relevant to the structuring of AI system threat vectors.
2.2. Literature Review (scope and expectations)
Your review must cover the following areas, and it should compare ideas, assumptions, and classification approaches rather than simply listing frameworks or authors:
AI systems as socio-technical systems, including interacting layers such as data, models, pipelines, deployment environments, interfaces, users, and governance structures.
Threats to AI systems, including at least:
data poisoning and data integrity risks,
adversarial manipulation and evasion,
model extraction, inversion, and privacy-related threats,
infrastructure and deployment vulnerabilities,
insider and human-factor risks,
third-party and supply-chain dependencies,
governance, compliance, accountability, and oversight failures.
AI security and assurance frameworks, including academic and institutional discussions of risk classification, AI attack surfaces, system reliability, trustworthiness, and assurance methods.
Taxonomy design in research, including why taxonomies are used, what makes a taxonomy useful, and how categorisation logic affects interpretability, completeness, and practical applicability.
Emerging and advanced AI contexts, including generative AI, multimodal AI, autonomous systems, and agentic AI, where relevant to system-level threat expansion.
CAIMS and CAIMOM as structuring references, including the role of CAIMS as a unifying governance architecture and CAIMOM as an operational lifecycle model with staged AI system progression. The textbook explicitly presents CAIMOM through stages such as AI strategy and governance, architecture and data engineering, model training and development, build/evaluation/transition, inference operations and support, and continuous model refinement.
Lifecycle-based risk thinking, including how threat vectors differ across design, development, deployment, operation, refinement, retirement, and ecosystem interaction.
Bowtie analysis and causal risk visualisation, including the logic of threat sources, top events, preventative barriers, consequences, and recovery barriers, because the textbook explicitly treats Bowtie as a systems-oriented method for mapping high-consequence algorithmic risk.
Third-party governance and external dependency risk, including suppliers, API providers, cloud services, data vendors, model providers, and outsourced AI components, which are also reflected in the textbook’s discussion of third-party relationships and supply-chain control.
You must use academic sources, and you should prefer primary sources for major conceptual or technical ideas. You should also use high-quality institutional or industry reports where appropriate, particularly when addressing operational AI risk, governance, or assurance. Referencing must remain consistent throughout the paper.
2.3. Methodology (what you must describe)
You must describe your method clearly, and it must be sufficiently structured to be reproducible.
2.3.1. Review method
You must explain:
your search strategy, including databases, repositories, standards bodies, institutional sources, and keywords used;
your inclusion and exclusion criteria;
and your screening and synthesis approach.
2.3.2. Baseline taxonomy analysis method
You must describe how Hernan Huwyler’s AI System Threat Vector Taxonomy was analysed as the conceptual baseline. You should explain:
how its categories were interpreted;
how its organisational logic was examined;
and how its strengths and limitations were assessed.
2.3.3. Taxonomy development method
You must describe:
how your taxonomy dimensions and categories were selected;
why those dimensions matter;
how you derived sub-categories;
and how you refined the final structure.
2.3.4. Mandatory articulation perspectives
Your taxonomy development must explicitly use the following perspectives:
CAIMS perspective
You must explain how CAIMS helps frame the AI system as a socio-technical governance architecture rather than a purely technical artefact.
CAIMOM perspective
You must explain how threat vectors are mapped across CAIMOM’s staged operating model, because the textbook presents CAIMOM as a lifecycle-oriented operating structure for AI systems.
AI lifecycle perspective
You must classify or discuss threats according to lifecycle stages, not merely static categories.
Bowtie perspective
You must explain how Bowtie logic is used to analyse at least selected high-impact threat vectors in terms of threat sources, top events, preventative barriers, consequences, and recovery or mitigation barriers. The textbook explicitly treats Bowtie as a method for mapping causal pathways and cascading failures.
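The five Bowtie components named above can be sketched as a simple record structure. The example below is a hypothetical illustration only (the data-poisoning scenario and its barriers are placeholders, not part of any required taxonomy), showing how one threat vector decomposes into sources, a top event, barriers on both sides, and consequences:

```python
from dataclasses import dataclass

@dataclass
class BowtieAnalysis:
    """One Bowtie record: causes on the left, consequences on the right,
    with barrier sets on either side of the central top event."""
    threat_sources: list          # left-hand causes
    top_event: str                # the central loss-of-control event
    preventative_barriers: list   # left-hand controls
    consequences: list            # right-hand outcomes
    recovery_barriers: list       # right-hand mitigations

# Hypothetical example: data poisoning of a training pipeline.
poisoning = BowtieAnalysis(
    threat_sources=["malicious data contributor", "compromised data vendor"],
    top_event="poisoned records enter the training set",
    preventative_barriers=["data provenance checks", "anomaly screening"],
    consequences=["degraded model accuracy", "targeted misclassification"],
    recovery_barriers=["model rollback", "retraining on vetted data"],
)
```

Drafting selected high-impact vectors in this form first can make the subsequent Bowtie diagrams easier to keep consistent.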
2.3.5. Additional required perspectives
You must add at least three further perspectives, such as the following:
Third-party / supply-chain perspective
Threats emerging from vendors, service providers, foundation model providers, API dependencies, external datasets, and outsourced components. The textbook’s treatment of third-party relationships makes this especially relevant.
Human and organisational actor perspective
Threats linked to users, insiders, developers, operators, governance owners, and decision-makers.
Data provenance and lineage perspective
Threats linked to source integrity, training data quality, ownership, licensing, lineage, and traceability.
Governance and accountability perspective
Threats linked to unclear control ownership, weak oversight, ineffective policy integration, missing evidence trails, and poor auditability.
You may additionally use:
Infrastructure and platform perspective
Interested parties / stakeholder perspective
Control and assurance perspective
Regulatory and jurisdictional perspective
If you adapt the required Table of Contents, you must justify that choice briefly in this section.
2.3.6. Comparison tables (minimum 4)
You must include at least four comparison tables, and each table must have a clear title, defined criteria, and explicit sources.
You can use the following themes, and you may refine them or propose your own if you justify your choice:
Threat source comparison
Compare internal, external, accidental, malicious, human, technical, organisational, and third-party threat origins.
AI system layer comparison
Compare threat vectors across data, model, infrastructure, interface, operational process, governance, and ecosystem layers.
Lifecycle threat comparison
Compare threat vectors across design, development, data collection, training, validation, deployment, monitoring, maintenance, refinement, and decommissioning stages. This table should reflect AI lifecycle thinking and may be aligned to CAIMOM stages.
Governance and assurance comparison
Compare threats in terms of auditability, accountability, compliance implications, detectability, and control requirements.
Baseline vs proposed taxonomy comparison
Compare Huwyler’s taxonomy and your proposed taxonomy in terms of scope, granularity, clarity, applicability, and completeness.
CAIMS/CAIMOM alignment comparison
Compare how threat vectors manifest under CAIMS as a governance architecture and under CAIMOM as an operating model.
Bowtie risk logic comparison
Compare selected threat vectors by mapping threat source, top event, preventative controls, consequence pathways, and recovery controls.
Third-party dependency comparison
Compare risks linked to external data providers, model providers, infrastructure vendors, cloud services, and downstream ecosystem dependencies.
Each table must support your analytical argument rather than function as a decorative summary.
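As a prototyping aid, one of the required tables (here, the baseline-vs-proposed comparison) could first be drafted as structured data before being formatted in the IEEE template. The criteria and verdicts below are hypothetical placeholders, not a prescribed assessment:

```python
import csv
import io

# Hypothetical rows for a "baseline vs proposed taxonomy" comparison table.
criteria = [
    ("Scope",        "technical threat vectors",  "technical + governance + third-party"),
    ("Granularity",  "single-level categories",   "dimensions with sub-categories"),
    ("Lifecycle",    "implicit",                  "explicit stage mapping (CAIMOM-aligned)"),
    ("Bowtie logic", "not addressed",             "applied to high-impact vectors"),
]

# Serialise to CSV so the draft can be reviewed or imported into Word.
buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["Criterion", "Baseline taxonomy", "Proposed taxonomy"])
writer.writerows(criteria)
print(buffer.getvalue())
```

Working this way also makes it easier to keep the criteria column identical across several of the required tables.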
2.4. Analysis of Hernan Huwyler’s AI System Threat Vector Taxonomy (design you must include)
This section must provide a focused and academically grounded analysis of the baseline taxonomy, and it must be specific enough to show how your own work builds on it.
What this section must include
2.4.1. Conceptual overview
Provide a concise explanation of the structure and intent of Hernan Huwyler’s AI System Threat Vector Taxonomy. Explain what it attempts to classify, how its logic appears to be organised, and why it is important as a starting point.
2.4.2. Strengths
Identify and explain the conceptual strengths of the taxonomy. For example, you may discuss:
clarity of categorisation,
system relevance,
usefulness for governance or assurance,
practical interpretability,
or its value in structuring AI threat discussions.
2.4.3. Limitations and gaps
Critically assess the limitations of the taxonomy. For example, you may discuss:
omitted threat areas,
insufficient granularity,
unclear category boundaries,
weak treatment of governance or human-factor risks,
limited lifecycle coverage,
limited incorporation of third-party or ecosystem risks,
weak operational alignment with CAIMS or CAIMOM,
or insufficient treatment of Bowtie-style causal barrier analysis.
2.4.4. Transitional justification
Explain clearly why a novel taxonomy is needed and how your work extends, restructures, or strengthens the baseline taxonomy.
This section must not merely summarise Huwyler’s work. It must function as the analytical bridge between the baseline and your own proposed taxonomy.
2.5. Proposed Novel AI System Threat Vector Taxonomy (what you must present)
This section is the core contribution of the paper. It must present your original taxonomy clearly and systematically.
2.5.1. Taxonomy design requirements
You must:
define the taxonomy explicitly;
explain its categories and sub-categories;
justify the underlying classification logic;
and show how the structure improves upon existing approaches.
2.5.2. Mandatory design expectation
Your taxonomy must not be one-dimensional. It must be articulated through multiple perspectives, and these must include:
CAIMS
CAIMOM
AI lifecycle
Bowtie
Third-party / supply-chain
plus at least three additional perspectives
2.5.3. Visual representation
You must include at least one original visual or structured representation, such as:
a taxonomy tree,
a layered matrix,
a multi-dimensional classification table,
a hierarchical model,
a cross-mapping diagram,
a lifecycle-stage threat matrix,
or a Bowtie-based threat visual.
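Before drawing the visual, one minimal way to prototype a taxonomy tree is as a nested mapping from dimensions to sub-categories to threat vectors. The categories below are hypothetical placeholders, not a prescribed structure:

```python
# Hypothetical two-level taxonomy sketch: top-level dimensions map to
# sub-categories, which map to example leaf threat vectors.
taxonomy = {
    "Data layer": {
        "Integrity": ["data poisoning", "label flipping"],
        "Provenance": ["unlicensed training data", "broken lineage"],
    },
    "Governance layer": {
        "Accountability": ["unclear control ownership", "missing audit trail"],
    },
}

def count_vectors(tree):
    """Count leaf threat vectors across all dimensions and sub-categories."""
    return sum(len(vectors) for subs in tree.values() for vectors in subs.values())

print(count_vectors(taxonomy))  # → 6
```

A sketch like this helps check granularity and category overlap before committing to a final diagram.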
2.5.4. Minimum expectations
Your taxonomy should demonstrate:
conceptual originality,
logical coherence,
sufficient granularity,
practical usability,
and relevance to both technical and governance-oriented AI risk analysis.
You should ensure that the taxonomy captures not only technical attack vectors, but also operational, human, organisational, governance, third-party, and ecosystem-related threat vectors where justified.
You are strongly encouraged to show how one threat may appear differently when viewed through:
a governance architecture lens,
an operating model lens,
a lifecycle lens,
and a Bowtie causal-barrier lens.
2.6. Comparative Analysis, Results and Evaluation (what you must report)
Your paper must report results clearly, and your claims must be supported by structured analysis.
2.6.1. Minimum requirements
Include a comparative evaluation of your proposed taxonomy against Hernan Huwyler’s taxonomy and, where relevant, against other recognised AI threat or risk frameworks.
Include at least two evaluation dimensions, such as:
scope,
granularity,
interpretability,
governance applicability,
auditability relevance,
lifecycle coverage,
third-party coverage,
Bowtie applicability,
or usefulness for real-world AI assurance.
2.6.2. Findings
You must write 3–6 analytical findings, and each finding must be clearly supported by evidence from:
your literature review,
your comparison tables,
your structured taxonomy analysis,
and your CAIMS / CAIMOM / lifecycle / Bowtie articulation.
The purpose of this section is not quantitative experimentation. Rather, it is to demonstrate that your proposed taxonomy is analytically stronger, more complete, more current, more operationally useful, or more governance-ready than the baseline or comparator frameworks.
2.7. Discussion
Explain what worked well in your taxonomy design and what remained difficult or uncertain. Link your interpretation back to the literature, and discuss the broader implications of your taxonomy for AI governance, assurance, auditing, risk management, and system design.
You must also discuss limitations and acknowledge threats to validity, such as:
subjectivity in category construction,
possible literature selection bias,
overlap between threat categories,
the evolving nature of AI systems,
lifecycle ambiguity across complex enterprises,
limitations of Bowtie when applied to highly adaptive systems,
incomplete visibility into third-party dependencies,
limited validation through real-world deployment cases,
or the challenge of balancing simplicity and comprehensiveness.
This section should show academic maturity and critical self-evaluation.
2.8. Conclusion and Future Work
Summarise what your study established, and restate the value of the proposed taxonomy. You should explain what was learned from analysing Huwyler’s taxonomy and why the proposed taxonomy matters for the future of AI system governance and threat analysis.
Future work may include:
empirical validation in real organisational settings,
refinement for sector-specific AI systems,
mapping the taxonomy to CAIMS and CAIMOM implementation practices,
extending Bowtie-based AI risk visualisation,
strengthening third-party risk treatment,
mapping the taxonomy to control frameworks or standards,
extending it for generative AI and agentic AI ecosystems,
or integrating it with AI audit and assurance practices.
2.9. Appendices
Appendices should include, where relevant:
the full version of your proposed taxonomy,
additional comparison tables,
taxonomy diagrams,
Bowtie visualisations,
lifecycle cross-mappings,
and any supplementary figures used to explain category logic.
2.10. References
Use Zotero for reference management.
Use APA 7th edition style for citations and the reference list.
Deliverable 2 — Homework 07&08 - Discussion Paper (DOCX)
You will submit one DOCX document for this homework.
Discussion Paper Title
Application of “A Novel AI System Threat Vector Taxonomy” on “OpenClaw” and “NemoClaw”
Front Matter
Title Page
Your title page must include:
Course name
Homework title
Student name and surname
Student number
Department / programme
Instructor name
Submission date
Abstract
Write an abstract of 150–250 words summarising:
the core focus of the discussion paper;
the analytical approach used;
the main threat vectors identified in OpenClaw and NemoClaw;
and the principal findings of your comparative discussion.
Table of Contents
The Table of Contents must be placed after the abstract and before the introduction.
Introduction
Theoretical Framework or Methodology
Body: Detailed Analysis
Discussion
Conclusion
Back Matter
3.1. Introduction
3.1.1. Problem Statement or Context
This discussion paper focuses on the application of “A Novel AI System Threat Vector Taxonomy” to two agentic AI environments: OpenClaw and NemoClaw. Recent public reporting describes OpenClaw as an open-source AI agent platform, while NemoClaw is positioned as NVIDIA’s security- and policy-oriented layer for running OpenClaw more safely. (NVIDIA)
In this section, explain:
what OpenClaw is at a high level;
what NemoClaw is at a high level;
why they are relevant examples for AI-GRC analysis;
and why applying a threat vector taxonomy to such systems is meaningful.
The introduction should frame both systems as AI systems, not merely as isolated models. Therefore, the analysis should consider components such as inputs, outputs, processes, feedback, operational controls, runtime environments, human interaction, and governance dependencies, which aligns with the CAIMOM system logic presented in the course materials.
3.1.2. Significance of the Study
Explain why this study matters. You should discuss why agentic AI systems create a broader and more dynamic threat surface than static AI tools. Public sources indicate that OpenClaw-style agents are associated with concerns such as system access, persistent memory, autonomous action, and operational misuse, while NemoClaw is explicitly presented as an attempt to introduce privacy, sandboxing, and policy-based guardrails. (Seeking Alpha)
You should show why this makes the pair especially suitable for applying:
AI system threat taxonomy thinking,
CAIMS and CAIMOM perspectives,
AI lifecycle thinking,
and Bowtie logic for analysing causes, main events, consequences, and barriers.
3.1.3. Thesis Statement
State one clear analytical claim. For example, your paper may argue that:
Although NemoClaw introduces stronger control, sandboxing, and policy-based security mechanisms than OpenClaw, both systems still present significant AI system threat vectors across lifecycle, governance, third-party, human, and operational dimensions; therefore, a multi-perspective AI system threat taxonomy is necessary to analyse them adequately.
You may refine this claim, but your thesis must remain analytical and arguable.
3.2. Theoretical Framework or Methodology
3.2.1. Analytical Approach
You must use “A Novel AI System Threat Vector Taxonomy” as the principal analytical lens of the paper.
Your discussion must apply the taxonomy through multiple perspectives, and these must include the following:
CAIMS perspective
CAIMOM perspective
AI lifecycle perspective
Bowtie perspective
In addition, you should use several further perspectives where relevant, such as:
third-party / supply-chain perspective
human and organisational actor perspective
data provenance and lineage perspective
governance and accountability perspective
infrastructure and platform perspective
Bowtie should be used to analyse selected threat scenarios by identifying:
threat sources,
the main event,
preventative barriers,
consequences,
and recovery or mitigation barriers.
This requirement is fully consistent with the course materials, which present Bowtie as a structured method for identifying a main event, tracing pre-event causes, and mapping consequence pathways.
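To make the five components concrete for one candidate scenario, the mapping below sketches a hypothetical prompt-injection Bowtie for OpenClaw; the sources, barriers, and consequences are illustrative assumptions, not findings about the actual system:

```python
# Hypothetical Bowtie mapping for one OpenClaw threat scenario
# (prompt injection leading to unauthorised action execution).
# Keys mirror the five Bowtie components listed above.
bowtie = {
    "threat_sources": ["malicious web content", "crafted user message"],
    "main_event": "agent executes an unauthorised action",
    "preventative_barriers": ["input guardrails", "action allow-listing"],
    "consequences": ["data exfiltration", "unsafe system changes"],
    "recovery_barriers": ["sandboxed rollback", "audit-log review"],
}

for side in ("preventative_barriers", "recovery_barriers"):
    # A Bowtie without barriers on either side is incomplete by definition.
    assert bowtie[side], f"missing {side}"
```

Each scenario you select for Bowtie treatment can be drafted in this form before being drawn as a diagram for the appendix.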
You may organise the paper as a comparative case discussion paper.
3.2.2. Data Sources / Materials
You should base your discussion on:
your previously developed “A Novel AI System Threat Vector Taxonomy”;
course notes and course textbook material;
official or primary materials relating to OpenClaw and NemoClaw where available;
and credible secondary sources discussing their characteristics, risks, and controls.
Your analysis should take into account that NemoClaw is publicly described as a stack for running OpenClaw more safely through policy-based privacy and security guardrails, sandboxing, and controlled runtime behaviour. (NVIDIA)
You should also recognise that course materials emphasise:
CAIMOM as an operating model and system logic,
guardrails as external control layers around core AI models,
and third-party risk as a critical governance concern.
3.3. Body: Detailed Analysis
This section should be organised by themes or analytical issues, not by chronology.
3.3.1. Major Component A — Application of the Taxonomy to OpenClaw
Discuss OpenClaw by applying your taxonomy in a structured way.
Your analysis should examine threat vectors such as:
prompt injection and manipulation;
unauthorised action execution;
unsafe file or system operations;
privacy leakage;
persistent memory risk;
human-overtrust and misuse;
third-party extensions or plugin risks;
runtime and infrastructure vulnerabilities;
governance and accountability gaps;
and monitoring or auditability limitations.
You should explain how these threats appear when OpenClaw is viewed through:
a CAIMOM systems perspective,
an AI lifecycle perspective,
a third-party dependency perspective,
and a Bowtie perspective.
3.3.2. Major Component B — Application of the Taxonomy to NemoClaw
Discuss NemoClaw using the same taxonomy, but focus on how the system attempts to modify or reduce the threat surface.
Your analysis should examine:
policy-based controls,
sandboxing,
privacy mechanisms,
runtime restrictions,
inference routing,
guardrails,
and any governance implications of relying on an external control stack.
Public descriptions indicate that NemoClaw is designed to add policy-based privacy and security guardrails to OpenClaw and to provide a more controlled runtime environment. (NVIDIA)
At the same time, you should critically analyse whether NemoClaw:
truly removes risks,
merely relocates risks,
introduces additional third-party dependency risks,
or creates new control and assurance questions.
3.3.3. Synthesis of Findings
Bring both analyses together and explain how they support your thesis.
This subsection must compare OpenClaw and NemoClaw across the perspectives used. You should discuss:
which threat vectors remain common to both;
which threats are reduced by NemoClaw;
which threats become more governance-oriented rather than purely technical;
and which risks continue across the full AI lifecycle.
You should show clearly that AI system risk is not limited to the model itself. Rather, it also arises through inputs, processes, outputs, feedback loops, third parties, controls, and operational embedding, which is fully compatible with the CAIMOM view of systems presented in the course.
3.4. Discussion
3.4.1. Implications of the Analysis
Explain the practical and conceptual implications of your analysis.
Your discussion should address implications for:
AI governance,
AI assurance,
AI security,
operational resilience,
third-party oversight,
runtime control,
and organisational accountability.
You should also discuss the relevance of external guardrails and defence-in-depth controls. The course materials explicitly describe guardrails as deterministic or lightweight control layers placed around the core model in order to constrain unsafe inputs and outputs.
Where appropriate, explain how Bowtie thinking helps move from abstract threat lists to a more operational analysis of:
causes,
barriers,
and consequences.
3.4.2. Limitations
Acknowledge the limitations of your discussion paper.
These may include:
limited access to internal technical documentation of OpenClaw or NemoClaw;
reliance on public descriptions and secondary reporting;
the evolving nature of agentic AI systems;
uncertainty regarding deployment-specific configurations;
and the fact that a taxonomy-based discussion does not by itself prove real-world exploitability.
You should be transparent and analytical in this section.
3.5. Conclusion
3.5.1. Restatement of Thesis and Key Findings
Restate your thesis clearly and summarise the principal findings of the paper.
You should explain, in a concise form:
what the taxonomy revealed about OpenClaw;
what the taxonomy revealed about NemoClaw;
where the most significant threat vectors remain;
and why a multi-perspective AI system threat taxonomy is useful for analysing agentic AI systems.
3.5.2. Recommendations or Future Research Directions
Conclude with practical recommendations or future research directions.
These may include:
stronger human-in-the-loop controls;
stricter runtime sandboxing;
better third-party governance;
stronger auditability and logging;
lifecycle-based control mapping;
richer Bowtie analyses for high-impact threat scenarios;
or future validation of the taxonomy on other agentic AI systems.
3.5.3. Back Matter
References / Bibliography
You must include a full reference list of all sources cited in the paper.
Use a consistent academic referencing style throughout the document.
Appendices
Appendices may include:
comparison tables,
Bowtie diagrams,
lifecycle mapping tables,
CAIMS / CAIMOM cross-mapping figures,
or supplementary threat categorisation material.
Additional Instructor Notes
This homework is a discussion paper, not a full empirical research article. Therefore, the emphasis should be on:
structured analysis,
conceptual clarity,
correct application of the taxonomy,
and critical comparison between OpenClaw and NemoClaw.
A strong submission will not merely describe the two systems. Instead, it will apply the taxonomy analytically, and it will show how different perspectives such as CAIMS, CAIMOM, AI lifecycle, Bowtie, and third-party dependency help reveal different parts of the AI system threat surface.
Submission rules, marking, integrity, and writing standards
4.1. Submission format and file requirements
You must prepare both deliverables in DOCX format.
You must use the provided IEEE Access (April 2024) Word template for both papers.
You must not change the template’s core formatting, including columns, built-in heading styles, fonts, spacing, and general page layout.
You may add appendices, figures, tables, and structured subheadings where needed, provided that the final document remains consistent, readable, and academically appropriate.
The final submission must be properly proofread and complete.
4.2. Submission package and naming
Submit your files via the KHAS Learn System.
You must submit the following files:
Deliverable 1: AS_04-HW05&06_<StudentName>_<StudentSurname>.docx
Deliverable 2: AS_04-HW07&08_<StudentName>_<StudentSurname>.docx
If you used AI tools through a chat interface or API, you must also submit:
Deliverable 3 (AI Prompts Log):
AS_04-AI-Prompts_<StudentName>_<StudentSurname>.txt or .docx
Use your own name and surname consistently across all filenames. Do not submit compressed folders unless explicitly requested.
4.3. Homework marking rubric (100 points)
This assignment will be marked as a single integrated assignment worth 100 points in total.
A. Deliverable 1 — Academic Paper (55 points)
Literature review depth, relevance, and synthesis — 12
Methodology clarity and reproducibility — 8
Quality of baseline taxonomy analysis — 7
Originality, coherence, and completeness of the proposed taxonomy — 12
Comparison tables and comparative evaluation quality — 8
Discussion, conclusion, and future work — 8
B. Deliverable 2 — Discussion Paper (35 points)
Correct and consistent application of the taxonomy to OpenClaw — 10
Correct and consistent application of the taxonomy to NemoClaw — 10
Quality of comparative synthesis and analytical argument — 8
Discussion, implications, limitations, and recommendations — 7
C. Presentation, integrity, and academic writing quality (10 points)
Structure, formatting, language quality, citation discipline, and submission completeness
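The rubric weights above can be tallied to confirm the stated totals (55 + 35 + 10 = 100). The snippet below simply reproduces the point values from Section 4.3 as data and sums them; the dictionary names are illustrative.

```python
# Rubric point values from Section 4.3, reproduced to confirm the totals.
deliverable_1 = {
    "Literature review depth, relevance, synthesis": 12,
    "Methodology clarity and reproducibility": 8,
    "Quality of baseline taxonomy analysis": 7,
    "Originality and completeness of proposed taxonomy": 12,
    "Comparison tables and comparative evaluation": 8,
    "Discussion, conclusion, and future work": 8,
}
deliverable_2 = {
    "Taxonomy applied to OpenClaw": 10,
    "Taxonomy applied to NemoClaw": 10,
    "Comparative synthesis and analytical argument": 8,
    "Discussion, implications, limitations, recommendations": 7,
}
presentation = 10  # Section C: presentation, integrity, writing quality

total = sum(deliverable_1.values()) + sum(deliverable_2.values()) + presentation
print(total)  # 100
```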
4.4. Academic integrity and authorship responsibility
Plagiarism is taking and using another person’s ideas, writing, images, data, tables, or other work as your own without proper acknowledgement. It is a serious academic offence. Any confirmed case of plagiarism, fabrication, falsification, ghost-writing, unauthorised collaboration, or deceptive AI use may result in an immediate failing grade for the assignment and may also lead to further disciplinary action under university rules.
You are responsible for ensuring that:
all sources are cited properly;
direct quotations are clearly marked;
paraphrased ideas are referenced appropriately;
tables, figures, and diagrams adapted from other works are acknowledged correctly;
and all submitted work genuinely reflects your own understanding and final judgement.
If you are uncertain about whether a practice is acceptable, you must ask before submitting.
4.5. Use of AI tools and mandatory declaration
You may use AI tools to support your work, but you must declare their use clearly, specifically, and honestly.
4.5.1. What must be declared in the paper
In the Appendix, or in a short AI Usage Statement section, you must state:
whether you used any AI tools;
which tool or tools you used, such as ChatGPT, Claude, Copilot, Gemini, or another system;
for which purpose or purposes you used them, such as brainstorming, outlining, proofreading, language improvement, source discovery support, comparison structuring, or rewriting assistance;
which parts of the work were influenced;
and a short note confirming that you verified the accuracy of the content and that you remain fully responsible for the final submission.
4.5.2. Additional deliverable for chat or API use
If you used AI through a chat interface or API, you must submit a third file containing a concise prompts log.
This file must include:
the prompt or prompts used, including important follow-up prompts where relevant;
the tool name;
the date of use;
and a short label describing the purpose of each prompt.
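The four required fields above map naturally onto a simple per-entry template for the prompts log. The sketch below is one possible plain-text layout covering those fields; the function name, field ordering, and `---` separator are assumptions, since the assignment prescribes the content but not the exact format.

```python
from datetime import date

# Hypothetical formatter for one prompts-log entry. The fields mirror the
# four required items: tool name, date of use, purpose label, and prompt.
def format_log_entry(prompt: str, tool: str, used_on: date, purpose: str) -> str:
    return (
        f"Tool: {tool}\n"
        f"Date: {used_on.isoformat()}\n"
        f"Purpose: {purpose}\n"
        f"Prompt: {prompt}\n"
        "---"
    )

entry = format_log_entry(
    prompt="Suggest headings for a literature review on AI threat taxonomies.",
    tool="ChatGPT",
    used_on=date(2026, 5, 8),
    purpose="Outlining",
)
print(entry)
```

One entry per prompt, separated consistently, keeps the log concise and easy for the marker to scan.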
If no AI tool was used, you should still include a brief statement in the paper such as: “No AI tools were used for this submission.”
4.6. Writing, references, and overall document quality
Write in formal British English.
Use clear academic structure, coherent paragraphing, and consistent terminology across both deliverables.
Ensure that arguments are analytical rather than merely descriptive.
Support claims with references or explicit reasoning.
Follow the required section and subsection headings unless a justified methodological reason is provided.
Ensure that the discussion paper clearly builds upon the taxonomy developed in the academic paper.
References must be managed and inserted by using Zotero.
Citations and the reference list must follow APA 7th edition style (Publication Manual of the American Psychological Association (APA), Seventh Edition: http://www.apastyle.org/manual/).
Late, incomplete, incorrectly named, corrupted, or unreadable submissions may be penalised according to course policy.
Created the full Deliverable 2 DOCX based on your uploaded Deliverable 1 taxonomy baseline (Assignment05_paper_1.docx), with OpenClaw and NemoClaw details verified against the primary OpenClaw GitHub repository and NVIDIA NemoClaw/OpenShell sources.
A few fields are left as placeholders because they were not provided: student number, department/programme, and instructor name.