1. AI Data Governance
EU AI Act Compliance (Aug 2026)
In alignment with the EU AI Act (Regulation (EU) 2024/1689), we categorize our AI components (including our Guided Discovery Agent) as "Limited Risk" systems.
- AI Disclosure: We explicitly signal all AI-generated interactions and content.
- Human-in-the-Loop: High-risk career advice generated by our systems is subject to human review and verification against verified industry telemetry.
- Data Provenance: We train our internal ranking models only on anonymized datasets governed by our enterprise-standard "Clean Source" protocols.
3. US: California Privacy (CCPA/CPRA)
Do Not Sell My Personal Information
Under the CCPA, California residents have the right to opt-out of the "Sale" or "Sharing" of their personal information. Top AI Courses does not sell your data for monetary compensation. However, our use of targeting cookies may constitute "sharing" under California law.
You may exercise your opt-out rights at any time via our Compliance Dashboard. Independent of your choice, we minimize the data we retain as follows:
| Category | Protocol | Retention |
|---|---|---|
| Identity Data | One-way Hashing (SHA-256) | 30 Days |
| Behavioral Signals | Differential Privacy Layers | Session Only |
| Geographic Telemetry | City-Level Anonymization | 24 Hours |
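To illustrate the one-way hashing protocol in the table above, here is a minimal Python sketch (the `pseudonymize` helper is hypothetical, not our production code):

```python
import hashlib

def pseudonymize(identifier: str) -> str:
    """Illustrative one-way hashing of an identity field (e.g. an email).

    The digest cannot be reversed to recover the original value, while
    equal inputs always map to the same digest, so records can still be
    linked without storing the raw identifier.
    """
    return hashlib.sha256(identifier.encode("utf-8")).hexdigest()

digest = pseudonymize("user@example.com")
print(digest)  # 64 hex characters; the original address is not recoverable
```

Because the hash is deterministic, duplicate records can be deduplicated and retention limits enforced without ever retaining the plaintext identifier.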
4. Withdrawal & Erasure
The "Stop Button" Procedure
In compliance with the June 2026 EU E-commerce Directive, we provide an immediate, simplified "Right of Withdrawal" for all users. You may halt data collection or request complete erasure at any time via our unified Compliance Dashboard.
Legal Right to Correction
If our AI models generate erroneous data regarding your profile or career path, you have the statutory right to request an immediate manual correction by our Human-in-the-Loop task force.
5. Google Consent Mode v2
Technical Data Sovereignty
We utilize Google Consent Mode v2 to bridge the gap between user privacy and analytical utility. When you "Reject All," our architecture automatically adjusts the behavior of Google tags so they do not read or write cookies for advertising or analytics purposes. Instead, we use conversion modeling—which uses non-identifiable signals—to maintain site performance without compromising your identity.
6. AI Anonymization Protocol
Differential Privacy Standards
To power our 2026 ROI simulations, we utilize Differential Privacy. This means we inject calibrated "mathematical noise" into our datasets. Even if our AI models are prompted with adversarial queries, differential privacy provides a mathematical guarantee that no individual user's data can be reliably reconstructed from the aggregate output.
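A minimal sketch of the underlying idea, assuming a simple counting query with sensitivity 1 (this is the classic Laplace mechanism; `dp_count` is an illustrative helper, not our production pipeline):

```python
import math
import random

def dp_count(true_count: int, epsilon: float = 1.0) -> float:
    """Return the count with Laplace noise of scale 1/epsilon added.

    For a counting query (sensitivity 1), this satisfies epsilon-
    differential privacy: any single user's presence or absence shifts
    the output distribution by at most a factor of e**epsilon.
    """
    scale = 1.0 / epsilon
    u = random.random() - 0.5  # inverse-CDF sampling of Laplace noise
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Any single user's contribution is masked by the noise,
# while aggregate statistics remain useful:
noisy = dp_count(1_000, epsilon=1.0)
```

A smaller `epsilon` adds more noise (stronger privacy, less accuracy); a larger one does the reverse, which is the standard privacy/utility trade-off.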
7. Machine Unlearning Protocol
Right to be Forgotten (AI Models)
Unique to 2026 standards, we provide a Machine Unlearning Protocol. If your anonymized interaction data was used to weight our local course-ranking models, you may request "Unlearning." We will execute a localized re-training session that purges the mathematical weights associated with your historical session signatures.
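A toy sketch of the concept, assuming exact unlearning by retraining with the user's rows removed (the data layout and function names are illustrative, not our production model):

```python
def train_ranking_weights(interactions):
    """Toy 'model': each course's weight is the mean rating it received."""
    totals, counts = {}, {}
    for user_id, course, rating in interactions:
        totals[course] = totals.get(course, 0.0) + rating
        counts[course] = counts.get(course, 0) + 1
    return {course: totals[course] / counts[course] for course in totals}

def unlearn(interactions, user_id):
    """Retrain from scratch with the user's rows removed, so no weight
    retains any contribution from that user's history."""
    remaining = [row for row in interactions if row[0] != user_id]
    return train_ranking_weights(remaining)

interactions = [
    ("u1", "python-101", 5.0),
    ("u2", "python-101", 3.0),
    ("u2", "sql-basics", 4.0),
]
before = train_ranking_weights(interactions)
after = unlearn(interactions, "u1")
# "python-101" no longer reflects u1's rating; "sql-basics" is unchanged.
```

Real unlearning systems avoid full retraining with techniques such as sharded retraining, but the guarantee is the same: the post-unlearning weights are as if the user's data had never been used.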
Executive Contact
Submit all formal GDPR/CCPA requests to our Appointed Data Protection Officer.
Initial Acknowledgment: 48 Hours
Substantive Resolution: 30 Days