The AI Act's New Line: When Influence Becomes Manipulation and What SaaS Teams Need to Know
Under the EU AI Act, AI doesn't just suggest what you should buy; some of its influence can now be legally forbidden.
In February 2025, the European Commission published guidance that draws a sharper legal line between permissible influence and prohibited manipulation in AI systems. This is big for SaaS builders: if you use AI to persuade or guide users, you need to understand where legitimate influence ends and prohibited manipulation begins.
This article unpacks what the EU means by 'manipulation', why this matters for marketing and AI-powered SaaS, and how product teams can adapt to comply.
1. What the New Guidance from the European Commission Means
The European Commission's 2025 guidance clarifies how Article 5 of the EU AI Act prohibits certain uses of AI that exploit human cognitive biases or vulnerabilities.
While not all 'influence' is banned, the Commission makes a clear distinction:
- Influence, such as persuasion through reasoning or relevant information, is generally allowed.
- Manipulation, deception, coercion, and exploitation can be strictly prohibited, especially when they target vulnerable people or rely on subliminal techniques.
In short: the AI Act invites innovation, but draws a line at using AI to exploit human vulnerabilities.
2. The Five Forms of Influence (and Why They Matter)
In his research, Dr. Théo Antunes identified five forms of influence that AI systems can use. These help define which practices are acceptable and which may violate the law:
- Persuasion: the 'good' kind of influence. AI offers logical arguments, explains pros and cons, and respects user autonomy.
- Manipulation: AI leverages cognitive biases or presents 'results' in a skewed or misleading way, subtly pushing users toward a choice.
- Deception: AI provides false, distorted, or misleading information in a way that seems trustworthy.
- Coercion: AI exerts psychological pressure so users feel compelled to act in a certain way.
- Exploitation: AI identifies and targets personal vulnerabilities (such as age, socioeconomic status, or mental state) to influence decisions.
The guidance makes it clear: only persuasion is broadly permissible, while manipulation, deception, coercion, and exploitation carry legal risk.
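This taxonomy is easy to operationalize inside a product team. Below is a minimal TypeScript sketch, with entirely hypothetical names and no legal weight, of how the five forms might be encoded so internal feature reviews can flag anything that goes beyond plain persuasion:

```typescript
// Hypothetical sketch: encoding the five influence forms for internal
// feature reviews. Names and risk mapping are illustrative, not legal advice.
type InfluenceForm =
  | "persuasion"    // reasoned arguments, respects autonomy
  | "manipulation"  // exploits cognitive biases
  | "deception"     // false or misleading information
  | "coercion"      // psychological pressure
  | "exploitation"; // targets personal vulnerabilities

type RiskLevel = "permissible" | "prohibited-risk";

// Only persuasion is broadly permissible under the guidance;
// the other four forms carry legal risk under Article 5.
const RISK_BY_FORM: Record<InfluenceForm, RiskLevel> = {
  persuasion: "permissible",
  manipulation: "prohibited-risk",
  deception: "prohibited-risk",
  coercion: "prohibited-risk",
  exploitation: "prohibited-risk",
};

interface FeatureReview {
  feature: string; // e.g. "upgrade-nudge-banner"
  influenceForms: InfluenceForm[];
  notes: string;
}

// Flag any feature whose influence goes beyond plain persuasion.
function needsLegalReview(review: FeatureReview): boolean {
  return review.influenceForms.some(
    (form) => RISK_BY_FORM[form] === "prohibited-risk"
  );
}
```

Tagging features this way forces the classification question to be asked explicitly, rather than leaving it implicit in design discussions.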
3. What 'Manipulation' Actually Means Under the AI Act
So what is 'manipulation' in the eyes of EU law?
- The AI system exploits cognitive biases or vulnerabilities.
- It presents information in a way that misleads or 'nudges' users into decisions they might not have made otherwise.
- There does not always need to be malicious intent: even an 'unintended' effect can violate the rules if the distortion is significant.
In simpler terms, AI can't trick or exploit users just because it is powerful. The law demands that influence stay transparent, respectful, and non-coercive.
4. Why This Is a Big Deal for AI-Powered Marketing and SaaS
If your SaaS uses AI for ads, personalization, or persuasive messaging, you need to pay attention. Here's why:
A. Behavioral Targeting Gets Riskier
AI that analyzes user behavior and delivers tailored content or ads can drift into manipulation, especially if it:
- Predicts emotional states
- Pushes targeted offers based on vulnerability
- Tries to maximize engagement using subtle behavioral tricks
If that influence 'materially distorts' decision-making, it could fall under prohibited AI practices.
B. Persuasion Optimization Walks a Fine Line
Many marketing platforms use AI to optimize messaging, creative, and placement. Done carelessly, that optimization could:
- Exploit biases (e.g. loss aversion, social pressure)
- Use emotionally loaded language to push users
- Slip into deception by omitting or misrepresenting risks
C. AI-Driven Content Must Be Transparent
Under the AI Act, systems that generate or curate content must sometimes clearly signal:
- That the content is AI-generated
- How the AI arrives at its recommendations
- Whether the system tries to guide or push user behavior in specific directions
D. Vulnerable Populations Are Protected
The guidelines expressly prohibit using AI to influence individuals who may be emotionally, cognitively, or socially vulnerable. That means careful design is needed for apps aimed at sensitive groups, for example, children, older adults, or those in financial stress.
5. What the AI Act Actually Prohibits (And What It Allows)
Here are some key legal rules from Article 5 of the AI Act and associated guidance:
- Prohibition of manipulative AI: Systems that 'deploy subliminal or purposefully manipulative or deceptive techniques' are banned.
- Exploitation of vulnerability: AI systems that identify and leverage personal vulnerabilities (e.g. age, disability, economic status) in a way that distorts behavior are prohibited.
- Deceptive AI: Systems that feed false information or misleading content to a person to influence them can break the law.
- Transparency obligations: Providers may need to clearly disclose that content is generated by AI and label it accordingly (per Article 50).
Importantly, not all persuasion is banned. If AI simply offers reasoning or arguments without using 'dark pattern' tricks, it may be allowed.
6. Real-World Risks for SaaS Founders and AI Teams
So what's the real risk for SaaS teams building AI-driven products?
Risk 1: Non-compliance
If your system crosses into 'manipulation,' you may face enforcement, fines, or forced design changes. For prohibited practices, fines under the AI Act can reach €35 million or 7% of global annual turnover, whichever is higher; enforcement can also include model evaluations and mandated mitigation measures.
Risk 2: Reputation damage
Even if you don't intend to manipulate, users or regulators may see your AI as exploitative if it targets emotions or vulnerabilities.
Risk 3: Product redesign
To comply, you might need to remove or redesign features that rely on persuasive or emotionally targeted AI behaviors.
Risk 4: Legal uncertainty
Because the line is not always bright, your design team will need to ask not just 'can we build this technically?' but also 'is this legal, and should we do it?'
7. How to Build AI Responsibly Under the New Guidance
If you're building a SaaS product or integrating AI that influences users, here are steps to stay on the right side of the AI Act:
Step A: Map Influence Paths
List all ways your AI impacts users. For each, ask:
- Is this simple persuasion or something more?
- Could it exploit a cognitive bias or vulnerability?
- Could it steer someone into a decision they wouldn't otherwise make?
Understanding how influence works in your product helps you design for compliance.
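As an illustration, here is a minimal sketch, assuming a simple in-house register (all names hypothetical), of how those three questions might be recorded per influence path:

```typescript
// Hypothetical influence-path register: one entry per way the AI
// touches user decisions, with the three screening questions recorded.
interface InfluencePath {
  surface: string;            // where the user encounters it, e.g. "pricing page recommender"
  mechanism: string;          // how it influences, e.g. "ranks plans by predicted fit"
  isPlainPersuasion: boolean; // reasoning/information only?
  exploitsBiasOrVulnerability: boolean;
  materiallyDistortsDecision: boolean;
}

const influenceMap: InfluencePath[] = [
  {
    surface: "onboarding checklist",
    mechanism: "suggests next steps with short explanations",
    isPlainPersuasion: true,
    exploitsBiasOrVulnerability: false,
    materiallyDistortsDecision: false,
  },
  {
    surface: "cancellation flow",
    mechanism: "countdown timer on a retention discount",
    isPlainPersuasion: false,
    exploitsBiasOrVulnerability: true, // plays on urgency / loss aversion
    materiallyDistortsDecision: true,
  },
];

// Paths that fail any screening question should go to legal review.
const flagged = influenceMap.filter(
  (p) =>
    !p.isPlainPersuasion ||
    p.exploitsBiasOrVulnerability ||
    p.materiallyDistortsDecision
);
console.log(`${flagged.length} influence path(s) need review`);
```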
Step B: Avoid Subliminal or Hidden Techniques
Steer clear of tactics that aim to influence without the user being aware:
- Hidden cues
- Flickering messages
- Auditory trickery
- Personalized emotional manipulation
If a choice is being nudged, make sure the user sees it, understands it, and can opt out.
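As a sketch of that principle (all names are hypothetical), a nudge can be built so the user sees the suggestion, understands why it appears, and can turn it off:

```typescript
// Hypothetical sketch: an explicit, dismissible nudge instead of a hidden one.
// The nudge states what is suggested, why, and offers a real opt-out.
interface Nudge {
  message: string;    // the suggestion, stated plainly
  reason: string;     // why the user is seeing it
  optOut: () => void; // one-click opt-out, honored persistently
}

function buildUpgradeNudge(onOptOut: () => void): Nudge {
  return {
    message: "You hit your project limit 3 times this month. The Pro plan removes it.",
    reason: "Shown because of your recent usage.",
    optOut: onOptOut,
  };
}

// Usage: render message and reason together, never the message alone,
// and stop showing the nudge once optOut() has been called.
const nudge = buildUpgradeNudge(() => console.log("nudges disabled"));
console.log(`${nudge.message} | ${nudge.reason}`);
```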
Step C: Add Clear Disclosures
When AI is influencing user behavior, be transparent:
- Inform users when an AI system is guiding, recommending, or persuading
- Label generated content clearly if required
- Explain your design principles, such as 'this system does not exploit psychological weaknesses'
Transparency mitigates risk and fosters trust.
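One way to operationalize this (field names are hypothetical, and Article 50's exact technical requirements are beyond this sketch) is to attach disclosure metadata to every AI-generated or AI-curated output, so the UI can always label it:

```typescript
// Hypothetical sketch: disclosure metadata attached to AI output so the
// rendering layer can always label it. Field names are illustrative.
interface AiDisclosure {
  aiGenerated: boolean; // content produced by an AI system
  purpose: "inform" | "recommend" | "persuade";
  explanation: string;  // plain-language basis for the output
}

interface LabeledContent {
  body: string;
  disclosure: AiDisclosure;
}

function labelRecommendation(body: string, explanation: string): LabeledContent {
  return {
    body,
    disclosure: {
      aiGenerated: true,
      purpose: "recommend",
      explanation, // e.g. "Based on features you used in the last 30 days"
    },
  };
}

// The rendering layer should refuse to display content without its disclosure.
const rec = labelRecommendation(
  "Teams like yours usually start with the Growth plan.",
  "Based on your team size and the integrations you connected."
);
console.log(`[AI-generated] ${rec.body} (${rec.disclosure.explanation})`);
```

Making the disclosure part of the data model, rather than a UI afterthought, is what keeps labeling from silently disappearing as features evolve.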
Step D: Decide on Safe Defaults
If your AI is capable of persuasion:
- Use default settings that limit persuasive intensity
- Let users choose 'neutral' or 'recommendation-only' modes
- Provide an option to disable personalization or behavioral optimization
This gives users control and reduces regulatory risk.
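A minimal sketch of what safe defaults could look like in configuration, assuming hypothetical setting names:

```typescript
// Hypothetical sketch: conservative defaults for persuasive features.
// Persuasion stays off or minimal until the user opts in.
interface InfluenceSettings {
  mode: "neutral" | "recommendation-only" | "persuasive";
  personalization: boolean;        // behavioral personalization on/off
  behavioralOptimization: boolean; // engagement-maximizing tuning on/off
}

// Safe default: recommendations without persuasive framing or
// behavior-based optimization. Users can opt in to more.
const DEFAULT_SETTINGS: InfluenceSettings = {
  mode: "recommendation-only",
  personalization: false,
  behavioralOptimization: false,
};

function applyUserChoice(choice: Partial<InfluenceSettings>): InfluenceSettings {
  // Explicit user choices override defaults; everything else stays safe.
  return { ...DEFAULT_SETTINGS, ...choice };
}

const settings = applyUserChoice({ personalization: true });
console.log(settings.mode); // still "recommendation-only"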
Step E: Perform Ethical and Legal Reviews
Before you deploy or scale, run a cross-functional review involving:
- Product designers
- Engineers
- Legal / compliance leads
- Behavioral experts (if available)
Use the Commission's guidance and frameworks from behavioral economics to evaluate risk.
Step F: Monitor and Audit
After deployment:
- Track how users respond
- Watch for signs of distortion or harm
- Keep audit logs of decisions, model behavior, and user feedback (a minimal logging sketch follows this list)
- Update your AI strategy to remove risky patterns
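Here is a minimal sketch of such an audit record, with hypothetical field names, plus one simple monitoring signal built on top of it:

```typescript
// Hypothetical sketch of an append-only audit record for AI-driven
// influence events, kept for post-deployment monitoring.
interface InfluenceAuditEvent {
  timestamp: string;    // ISO 8601
  feature: string;      // which AI surface acted, e.g. "retention-offer"
  modelVersion: string;
  influenceForm: "persuasion" | "manipulation" | "deception" | "coercion" | "exploitation";
  userAction: "accepted" | "declined" | "opted-out";
  notes?: string;       // e.g. user feedback or anomaly flags
}

const auditLog: InfluenceAuditEvent[] = [];

function recordInfluenceEvent(event: InfluenceAuditEvent): void {
  auditLog.push(event); // in production this would go to durable storage
}

// Simple monitoring signal: a surface that users almost never decline
// may be steering too hard and deserves a closer look.
function acceptanceRate(feature: string): number {
  const events = auditLog.filter((e) => e.feature === feature);
  if (events.length === 0) return 0; // no data yet
  const accepted = events.filter((e) => e.userAction === "accepted").length;
  return accepted / events.length;
}
```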
8. Special Cases: Vulnerable Users and Exploitation
The guidance explicitly calls out AI systems that exploit vulnerable people (based on age, disability, economic status, etc.) as potentially prohibited.
If you build AI for:
- Education (children)
- Mental health
- Finance or debt advice
- Elder care
... then you must be extra careful. You need to assess whether your system could exploit underlying vulnerabilities in a way that manipulates decision-making.
In some cases, what looks like a 'helpful recommendation' might legally be classified as 'exploitation' if it pushes decisions too hard or plays on emotional or financial weakness.
9. Exception: Therapeutic Use
Interestingly, there is a potential carve-out: the guidance suggests therapeutic AI might be allowed to use some persuasive or influencing techniques, but only in a well-defined clinical context.
In other words, AI used in mental health apps could, in theory, persuade or nudge, but such use must be transparent, justifiable, and well regulated. Overuse or vague 'therapeutic purpose' claims may not hold up legally.
10. Why This Guidance Matters for SaaS Founders
- Legal clarity: It helps founders understand which persuasion tools are OK and which are risky.
- Design guardrails: Product teams can build influence-based features more confidently.
- Compliance by design: Teams can bake in transparency and ethical checks early.
- Competitive edge: Being compliant with manipulation rules can become a trust signal.
- Risk management: Prevents regulatory or reputational fallout.
11. What SaaS Teams Should Do Today
- Review your AI features: List every system that influences behavior.
- Evaluate whether they cross into manipulation: Use the five-influence framework.
- Design transparency and consent mechanisms: Label content, inform users, provide opt-outs.
- Limit manipulative potential: Avoid subliminal techniques. Use safe defaults.
- Define your policy: Write an internal 'AI influence policy' mapped to the guidance.
- Audit regularly: Check how your AI is used in the wild. Remove risky patterns.
- Train your team: Make product managers, engineers, designers, and marketers aware of the legal risk.
- Document everything: Use logs and records to show compliance readiness.
12. Why This Is a Trust Opportunity
Compliance does not have to be a burden. For SaaS teams, this new guidance is a chance to build trust:
- Market your AI responsibly
- Make 'ethical by design' part of your brand
- Signal to users and investors that you care about long-term autonomy, not just conversion
- Reduce risk by proactively shaping how your AI influences decisions
In a world where AI is everywhere, your approach to influence may become one of your most important differentiators.
Conclusion
The EU's clarification on influence vs. manipulation under the AI Act is not just regulatory noise; it's a meaningful boundary that affects how SaaS companies build, market, and operate AI systems.
If you build AI that influences users, you need to pay attention. Understanding these rules and designing for them isn't just legal housekeeping: it's a powerful way to earn trust, win customers, and build more responsible AI.
Take the guidance seriously, embed it into your product processes, and you'll turn a potential risk into a reputation strength.
Ready to Ensure Your Compliance?
Don't wait for violations to shut down your business. Get your comprehensive compliance report in minutes.
Scan Your Website For Free Now