Updated April 2, 2026
CyberNut is committed to leveraging artificial intelligence (AI) to enhance cybersecurity awareness training and phishing defense for K-12 school districts, while maintaining the highest standards of data protection and privacy. This document covers our AI-specific practices. For our general data handling, security, and privacy practices, please refer to our Data Security & Privacy Plan.
At CyberNut, protecting the data entrusted to us by schools is a core part of how we build and operate our services. Our security and privacy program is designed around recognized best practices, including the NIST Cybersecurity Framework, and is supported by technical, operational, and contractual safeguards.
1. Robust Data Protection:
We implement strict data governance across our multi-tenant architecture, safeguarding individual customer data integrity and privacy in accordance with our Data Security & Privacy Plan.
2. Leveraging Pre-trained LLMs:
Rather than training large language models from scratch, we utilize existing models pre-trained on general datasets by specialized AI providers. This includes Claude Sonnet, developed by Anthropic, accessed exclusively through Amazon Web Services (AWS) Bedrock. We employ prompt engineering techniques to guide these models in producing our threat scoring and verdicts.
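To illustrate what prompt engineering means in this context: the pre-trained model's weights are never modified; instead, structured instructions are supplied at request time and the response is parsed into a verdict. The sketch below is purely illustrative (the instruction text, field names, and response format are hypothetical, not CyberNut's actual implementation):

```python
import json

# Hypothetical instruction block; real prompts would be far more detailed.
VERDICT_INSTRUCTIONS = (
    "You are an email threat analyst. Classify the email below as "
    "'phishing', 'suspicious', or 'benign', with a confidence from 0 to 1. "
    'Respond only with JSON: {"verdict": ..., "confidence": ...}'
)

def build_prompt(subject: str, sender: str, body: str) -> str:
    """Assemble a structured prompt for a pre-trained model (no training involved)."""
    return f"{VERDICT_INSTRUCTIONS}\n\nSender: {sender}\nSubject: {subject}\n\n{body}"

def parse_verdict(model_response: str) -> dict:
    """Parse the model's JSON response into a verdict record."""
    data = json.loads(model_response)
    if data["verdict"] not in {"phishing", "suspicious", "benign"}:
        raise ValueError(f"unexpected verdict: {data['verdict']}")
    return {"verdict": data["verdict"], "confidence": float(data["confidence"])}
```

In production the assembled prompt would be sent to the model through the AWS Bedrock API; this sketch stops at prompt construction and response parsing.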
3. Controlled Environment:
All AI model operations occur within our secure AWS environment, ensuring that data and model interactions remain under our direct control. Our AI infrastructure operates within the same AWS regional boundaries as our broader platform.
4. Advisory AI Verdicts:
All AI-generated threat verdicts are advisory by default. Admins retain full authority over any actions taken in response to a verdict. CyberNut also allows admins to configure preset rules that can automatically trigger remediation actions, such as quarantine or deletion, based on AI-generated verdicts. Admins can modify or disable these rules at any time.
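The advisory-by-default model described above can be pictured as follows: no action is taken unless an enabled, admin-configured rule explicitly matches the verdict. This is a minimal sketch only; the rule fields, thresholds, and action names are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PresetRule:
    """An admin-configured rule mapping an AI verdict to an automatic action."""
    verdict: str            # e.g. "phishing"
    min_confidence: float   # trigger only at or above this confidence
    action: str             # e.g. "quarantine" or "delete"
    enabled: bool = True    # admins can disable a rule at any time

def resolve_action(verdict: str, confidence: float,
                   rules: list[PresetRule]) -> Optional[str]:
    """Return an automatic action only if an enabled preset rule matches;
    otherwise the verdict remains advisory and nothing happens."""
    for rule in rules:
        if rule.enabled and rule.verdict == verdict and confidence >= rule.min_confidence:
            return rule.action
    return None  # advisory by default
```

With no rules configured, every verdict is advisory; disabling a rule immediately restores that default for the verdicts it covered.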
5. Continuous Improvement Without Customer Data Training:
We improve our threat scoring engine through human review and prompt refinement. When an AI-generated verdict does not align with the remediation action taken by an admin, the case is flagged for manual review by a member of our team, who adjusts our prompts and guardrails based on their findings. Reported threat emails reviewed in this process are already stored in our system in accordance with our data policies. At no point are customer emails used by automated systems to train, fine-tune, or otherwise modify any AI model.
6. Threat Detection:
When an email is reported as a potential threat, it is processed through our AI-powered email risk assessment engine to produce a threat verdict and confidence level. A portion of this analysis is handled through deterministic, rule-based logic; the remainder is analyzed by an AI model via AWS Bedrock.
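The split between deterministic logic and model analysis can be sketched as a two-stage pipeline. The specific checks, thresholds, and adjustment below are hypothetical examples and do not describe the actual engine:

```python
def deterministic_checks(sender: str, links: list[str]) -> list[str]:
    """Stage 1: rule-based findings that require no AI model."""
    findings = []
    domain = sender.rsplit("@", 1)[-1].lower()
    for link in links:
        if domain not in link.lower():
            findings.append("link-domain-mismatch")
            break
    if domain.endswith(".zip"):
        findings.append("suspicious-sender-tld")
    return findings

def assess(sender: str, links: list[str],
           ai_verdict: str, ai_confidence: float) -> dict:
    """Stage 2: combine rule findings with the AI verdict into one result.
    ai_verdict and ai_confidence stand in for the model call via AWS Bedrock."""
    findings = deterministic_checks(sender, links)
    if findings and ai_verdict == "phishing":
        # Corroborating rule-based evidence nudges confidence upward.
        ai_confidence = min(1.0, ai_confidence + 0.05)
    return {"verdict": ai_verdict, "confidence": ai_confidence, "findings": findings}
```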
7. Scoring Engine Validation:
When our threat scoring engine is updated, previously reported threat emails may be re-processed through the updated engine solely to validate that improvements are working as expected. This is a black-box testing process: the model is only queried, never modified. No customer data is used to train or modify any AI model.
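Conceptually, this validation pass re-runs a stored set of previously reported emails through the updated engine and compares the new verdicts against the expected results, treating the engine as a black box. Function names and sample data below are illustrative only:

```python
def validate_engine(engine, labeled_cases: list[tuple[dict, str]]) -> dict:
    """Black-box regression check: run each stored case through the updated
    engine and count agreements with the expected verdict. The engine is
    only queried; nothing in it is trained or modified."""
    agree = 0
    mismatches = []
    for email, expected in labeled_cases:
        verdict = engine(email)
        if verdict == expected:
            agree += 1
        else:
            mismatches.append((email, expected, verdict))
    return {"accuracy": agree / len(labeled_cases), "mismatches": mismatches}
```

Any mismatches surfaced this way would feed the same human review loop described in section 5, leading to prompt and guardrail adjustments rather than model changes.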
8. Product Improvement:
We collect explicit admin feedback and observe admin actions taken on threat verdicts. This feedback is manually reviewed and used only to refine our guardrails and prompts. It is never used to train the underlying AI model.
9. AWS Bedrock Data Protections:
By accessing AI models through AWS Bedrock, CyberNut ensures that customer data is never stored or logged by the model provider, never used to train third-party models, and never shared with any AI vendor.
• This policy is reviewed and updated on a quarterly basis.
• When new AI-powered features are introduced, CyberNut will notify customers in advance of their availability.
• Questions about this policy or CyberNut's AI data practices can be directed to hello@cybernut.com.