Where Business Technologists Should Not Use AI
Introduction
Business technologists face growing pressure to “put AI everywhere” in the enterprise, but the more important strategic question is where AI should not be used, or should only be used under very tight constraints. This is especially true in high‑stakes environments shaped by the EU AI Act, GDPR, sectoral regulation and evolving cybersecurity and governance standards. What follows is a deep, pragmatic exploration of those “no‑go” or “not‑yet” zones, aimed at business technologists responsible for enterprise systems. It assumes you are already familiar with AI’s potential; the focus here is on boundaries.
1. Prohibited and High‑Risk Uses
The first and clearest places to avoid AI are where regulators have either banned certain practices outright or made them presumptively high risk with onerous obligations. Business technologists who ignore these boundaries transfer innovation risk directly into compliance, litigation, and reputational risk.
1.1 Uses Explicitly Prohibited by Regulation
The EU AI Act establishes a category of “unacceptable‑risk” systems that are banned in the EU market. For global enterprises, building these into core platforms creates fragmentation and legal exposure.
The Act’s Article 5 prohibits several practices:
- AI systems that manipulate people’s behavior in ways likely to cause significant harm, for example by exploiting vulnerabilities of children or people with disabilities.
- AI systems that perform social scoring of individuals by public authorities, evaluating or classifying trustworthiness based on social behaviour or personal traits, with detrimental or disproportionate effects.
- AI used to assess or predict individual criminal risk solely on the basis of profiling or personality traits, rather than objective evidence.
- Untargeted scraping of facial images from the internet or CCTV to build facial recognition databases.
- Real‑time remote biometric identification in public spaces for law enforcement, subject only to narrow exceptions.
If you architect enterprise systems for these capabilities (e.g. centralized social scoring of customers or employees, exploitative behavioral manipulation, or extensive biometric scraping), you are designing against the grain of emerging law and human rights norms. Even if deployments start outside the EU, they can later block market access or create regulatory conflict when systems are reused or data is shared across regions. A practical example is “employee trust scores” combining monitoring data, email sentiment analysis, and badge swipes to rank staff. In EU terms this easily drifts into social scoring and intrusive surveillance, and the AI Act plus GDPR make such systems extremely difficult to justify.
1.2 Fully Automated Decisions With Legal or Significant Effects
GDPR Article 22 gives individuals “the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning them or similarly significantly affects them.”
That language captures common enterprise scenarios:
- Hiring and promotion decisions made entirely by an AI ranking system.
- Credit approvals and pricing decided automatically by scoring algorithms.
- Automated denial of essential services (utilities, telecommunications, basic banking) based on risk models.
Article 22 allows narrow exceptions, but even then controllers must provide safeguards such as the right to human intervention, the ability to express a view, and the right to contest the decision. Many AI‑first enterprise designs – “no human in the loop anywhere” – are therefore structurally incompatible with European data‑protection principles when outcomes are high‑impact.
Business technologists should avoid designing systems where:
- There is no practical human pathway to review and override automated decisions in employment, credit, insurance, healthcare access, education, or similar high‑impact domains.
- Auditability is impossible because the model is opaque and no trace of key features, data lineage, or model versions is retained.
1.3 High‑Risk Systems That You Should Not “Casually Automate”
The EU AI Act defines “high‑risk” systems and imposes obligations including risk management, quality management, technical documentation, logging, human oversight mechanisms, accuracy and robustness requirements, and registration in an EU database. Annex III lists areas such as:
- Critical infrastructure management.
- Education and vocational training (admissions, grading, exam proctoring).
- Employment, worker management, and access to self‑employment (recruiting, promotion, task allocation, performance evaluation, termination).
- Essential private and public services (credit scoring, benefits eligibility).
- Law enforcement, migration, justice, and biometric identification.
Nothing in the Act bans AI in these areas, but the burden of proof moves onto the deployer. You must demonstrate governance, explainability where appropriate, and continuous monitoring. For many enterprises, that bar is operationally and culturally too high, at least in the near term.
In practice, business technologists should be wary of deploying opaque models in these domains when:
- They cannot provide traceability from input data through model logic to final decision, in forms understandable to regulators, auditors, and affected individuals.
- They lack the organizational maturity to operate an AI management system aligned with ISO/IEC 42001 or equivalent standards.
- They cannot demonstrate systematic risk management in line with NIST’s AI RMF and ISO/IEC 23894 (covering legality, transparency, accountability, traceability, robustness).
In those cases, the safer option is either to defer high‑risk AI use or to constrain AI to advisory roles where human decision‑makers retain genuine control.
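Traceability of the kind regulators expect usually starts with a per‑decision audit record linking inputs, model version, and data lineage. A hedged sketch of what one such record might capture follows; the field names and log format are assumptions, not a mandated schema.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(model_id: str, model_version: str, inputs: dict,
                 output: str, data_sources: list[str]) -> dict:
    """One traceable record per prediction: what went in, what came out,
    which model produced it, and where the input data originated."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        # Stable hash of the inputs so records can be matched to stored data.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "inputs": inputs,              # or a redacted view, depending on data policy
        "output": output,
        "data_lineage": data_sources,  # upstream tables/feeds the features came from
    }

def append_audit_log(path: str, record: dict) -> None:
    """Append-only JSON Lines log; retention enables later regulator review."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

An append‑only log with model versions and input hashes is the minimum that makes "traceability from input data through model logic to final decision" more than a slogan.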
2. Human Rights and Fairness
The second “do not use” cluster includes contexts where AI decisions can entrench discrimination or erode fundamental rights, even if technically legal. Business technologists should be particularly cautious in domains involving identity and power.
2.1 Hiring, Promotion and People Analytics
Regulators increasingly scrutinize AI in employment because historical data encode structural bias and models can scale that bias across entire workforces. The US EEOC has issued guidance warning employers that using AI tools for selection and evaluation does not absolve them from responsibility for discrimination under Title VII. In Europe, the AI Act classifies AI used for recruitment and worker management as high risk, requiring risk management and human oversight.
Research on credit scoring illustrates the issue. A 2025 review of financial algorithms found that women systematically received lower credit scores than men, with measurable economic harm. Models using large language models to evaluate loan applications recommended higher interest rates or denials for Black applicants while approving identical white applicants. The same structural bias mechanisms easily propagate into HR systems: historical promotion patterns, performance evaluations, and attrition data all embed inequality, and naive models reproduce it.
Business technologists should avoid fully automating hiring, promotion, evaluation and termination decisions, especially with proprietary black‑box tools whose training data and features cannot be inspected. They should also avoid deploying AI‑driven personality profiling or “cultural fit” scoring that infers traits from video, voice or writing style. Such tools are notoriously prone to bias and lack scientific grounding.
Even AI used only for screening CVs or ranking candidates must be subjected to disparate‑impact analysis and human review. Where organizations lack that capability, it is safer not to adopt such systems.
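The disparate‑impact analysis mentioned above is often operationalised with the "four‑fifths rule" used in US employment‑selection practice: compare each group's selection rate and flag ratios below 0.8. A small illustrative sketch follows; the group names and numbers are invented, and passing this check alone does not establish fairness.

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total screened)."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def adverse_impact_ratio(outcomes: dict[str, tuple[int, int]]) -> float:
    """Four-fifths rule: ratio of lowest to highest group selection rate.
    A ratio below 0.8 is conventionally treated as evidence of adverse impact."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical screening results: group -> (candidates advanced, candidates screened)
results = {"group_a": (40, 100), "group_b": (24, 100)}
ratio = adverse_impact_ratio(results)   # 0.24 / 0.40 = 0.6
flagged = ratio < 0.8                   # True: review the screening tool
```

Running a check like this on every model release, and retaining the results, is the kind of documented process regulators look for when AI touches selection decisions.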
2.2 Worker Monitoring and Productivity Scoring
Digital monitoring technologies can track keystrokes, application usage, location, voice and even facial expressions. When AI turns those data into productivity scores or “risk profiles,” enterprises can easily cross into intrusive surveillance. Eurofound’s analysis of employee monitoring notes that the more monitoring resembles continuous, detailed surveillance, the higher the risk of infringing privacy and data‑protection rights and the harder it is to comply with GDPR’s principles of data minimisation and transparency. The EU AI Act will add another layer of scrutiny to AI‑based worker management, including self‑assessments and oversight mechanisms for high‑risk applications.
Business technologists should avoid:
- AI that continuously rates employees or flags “problematic” behavior without clear, transparent criteria and substantial human review.
- Using AI outputs as primary evidence in disciplinary processes without robust validation and a fair chance for employees to contest findings.
At minimum, AI in this domain should be limited to aggregated, anonymized analytics used to improve processes rather than to discipline individuals.
2.3 Social Scoring, Behavioural Manipulation and Vulnerable Users
The EU AI Act’s ban on social scoring and exploitative manipulation reflects a broader set of human‑rights concerns. AI systems that combine broad behavioural data to rank citizens or customers can damage dignity, freedom of expression and equal access to services.
Examples include:
- Customer “desirability” scores that determine service quality or pricing beyond objective risk metrics.
- Systems that personalize content or offers with the specific aim of exploiting addiction, financial vulnerability, or lack of digital literacy.
The US Federal Trade Commission has warned that manipulative uses of generative AI (e.g. steering people into harmful financial, health, education, housing, or employment decisions) may constitute unfair or deceptive practices. Given this regulatory direction in both EU and US contexts, business technologists should not design AI features whose business logic depends on exploiting cognitive or situational vulnerabilities.
3. Safety‑Critical and Regulated Domains
The third category covers domains where incorrect AI outputs can cause physical harm or major systemic risk. Here, the default stance should be extreme conservatism. Do not rely on AI beyond its proven capability and regulatory approval.
3.1 Healthcare and Clinical Decision Support
Healthcare presents an instructive case of both promise and peril. A widely reported 2025 incident in the UK involved an AI tool used to summarize patient records; it generated a false diagnosis of diabetes and suspected heart disease in a patient’s file, leading to an inappropriate invitation to diabetic screening. The AI had fabricated details, including a non‑existent hospital address; a human saw the error but inadvertently saved the wrong version and the erroneous data entered the record.
This episode encapsulates several reasons not to use AI in certain ways. Large language models are prone to hallucinations – plausible but false statements – especially when summarising or synthesising complex data. In regulated sectors like healthcare, hallucinations can trigger misdiagnosis and serious harm. Automation bias leads humans to over‑trust AI recommendations, even when they conflict with other available information. Regulators such as the European Medicines Agency have emphasised that LLMs in medicines regulation must be used with explicit governance, staff training and careful control over input data. Many national regulators treat AI tools as medical devices when used for diagnosis or treatment planning, subjecting them to strict approvals.
Business technologists working with healthcare or life‑sciences systems should therefore avoid:
- Allowing general‑purpose LLMs to write directly into patient records or order sets without mandatory human review prior to saving.
- Using non‑approved AI models for diagnosis, triage, or treatment decisions in production clinical workflows.
- Training models on patient data without a clear legal basis and alignment with health‑data regulations, which typically demand much higher safeguards than generic enterprise data.
Where AI is used, it should act as a decision‑support tool with clear separation between suggestion and final clinical decision.
3.2 Critical Infrastructure and Industrial Control
The AI Act classifies AI used as safety components in products covered by sectoral safety law – such as aviation, motor vehicles, medical devices, lifts – as high risk. In energy, organisations like NERC emphasize both the potential and the need for careful governance when using AI in grid reliability and compliance monitoring.
These contexts are intolerant of unanticipated failure modes, adversarial manipulation, or opaque reasoning. Large language models and other data‑driven systems have known fragilities: sensitivity to data drift and difficulty in providing formal guarantees of behaviour.
Business technologists should not:
1. Put unvalidated AI in closed‑loop control over industrial systems where failure can cause physical harm, environmental damage or large‑scale outages.
2. Use general‑purpose LLMs to generate or modify control logic, configuration scripts, or protection settings without rigorous independent safety engineering review.
3. Expose critical infrastructure control networks to internet‑facing AI services or agentic systems with the ability to call external tools without strict isolation and fail‑safes.
Guidance from national cybersecurity centres stresses that AI system security is a precondition for safety: models and data must be protected from poisoning, tampering, and misuse.
3.3 Financial Services, Credit, and Essential Services
AI in finance can improve fraud detection and risk modelling, but it also carries systemic fairness and stability risks. The European Banking Authority has mapped obligations under the AI Act against existing banking and payments regulations, underscoring that many uses of AI in credit scoring, trading, and risk management will be high risk and subject to strict requirements.
Discriminatory credit scoring models demonstrate why naive deployment is unacceptable. The 2025 bias review showed not only gender disparities in scores, but also racist recommendations by LLM‑based loan evaluation systems, which suggested higher interest rates or rejections for Black applicants compared with identical white applicants. Such behaviour breaches anti‑discrimination laws and undermines trust in financial institutions.
Business technologists should avoid:
- Fully automated credit decisions based on black‑box models without robust, documented processes for detecting and correcting bias.
- Using opaque AI systems to make eligibility decisions for essential services where customers have limited ability to contest or understand outcomes.
- Allowing AI agents to execute financial transactions or reconfigure trading systems autonomously without segregation of duties, limits, and human approvals.
In these domains, AI should be tightly governed, explainable where required, and embedded in a broader risk‑management framework such as NIST’s AI RMF and ISO/IEC 23894.
4. Security and Confidentiality
Security and confidentiality risks create another major class of “do not use AI” scenarios. The combination of powerful models, network connectivity and sensitive data can undermine core security controls if not handled with discipline.
4.1 Exposing Sensitive Data to External Models
Several high‑profile incidents have shown employees pasting proprietary or regulated data into public AI tools. The OWASP Top 10 for LLM applications explicitly warns about sensitive information disclosure, including leakage of personal data, trade secrets, and proprietary algorithms through model outputs or inversion attacks. The “Samsung leak” incident, where confidential code became part of model training data, is now a stock example in security guidance.
National cybersecurity agencies stress that GenAI access should be restricted by default. Ireland’s National Cyber Security Centre, for example, recommends that public sector bodies only allow GenAI use through exceptions based on approved business cases, and that providers’ security practices be scrutinised carefully. Shadow AI, the unsanctioned use of external AI tools by employees, has become a recognised risk vector.
Until you have a robust AI governance framework, sanctioned channels, and possibly self‑hosted or dedicated instances with proper access control, the safest position is to restrict AI exposure of critical data.
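A sanctioned channel typically includes an outbound filter that redacts known sensitive patterns before any text reaches an external model. The sketch below is deliberately minimal: the patterns are illustrative, and pattern matching alone is not sufficient protection; it belongs alongside access controls, DLP tooling, and user training.

```python
import re

# Hypothetical patterns for data that must never leave the enterprise boundary.
BLOCK_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b"),
}

def sanitize_prompt(text: str) -> tuple[str, list[str]]:
    """Redact known sensitive patterns before text reaches an external model,
    and report which categories were found so the request can be reviewed."""
    hits = []
    for name, pattern in BLOCK_PATTERNS.items():
        if pattern.search(text):
            hits.append(name)
            text = pattern.sub(f"[REDACTED-{name.upper()}]", text)
    return text, hits
```

Logging the `hits` list (without the redacted values) also gives the security team visibility into what employees are attempting to send out, which is useful evidence when tightening policy.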
4.2 Prompt Injection, Model Misuse and Agentic Systems
Prompt injection is now a well‑documented class of attack in which adversarial inputs cause LLMs to ignore prior instructions, exfiltrate secrets, or trigger harmful actions. The OWASP LLM Prompt Injection Prevention guidance shows how seemingly benign text (including data pulled from internal systems) can contain instructions that subvert the agent’s policy.
When enterprises wire LLMs into other systems (e.g. APIs, RPA tools, document repositories), the risk moves from wrong answers to concrete security incidents: data extraction, configuration changes, or fraudulent transactions. Guidance from national cybersecurity bodies emphasises implementing AI‑specific controls, e.g. restricting the actions models can take, monitoring query interfaces, and enforcing guardrails.
Business technologists should not:
- Give LLM‑based agents direct, unsupervised write access to production systems, admin consoles, or security‑sensitive APIs.
- Allow models to ingest untrusted external content (emails, web pages, user uploads) and then use that content as instructions for downstream actions without intermediate validation.
- Store secrets, access tokens, or internal prompts in places that model outputs can reveal through injection.
In other words, do not use AI as an autonomous operator of security‑critical or financial‑critical systems unless you have sophisticated, layered protections and well‑tested fail‑safes.
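Two of the controls above (restricting the actions an agent can take, and labeling retrieved content as data rather than instructions) can be sketched as follows. The tool names are hypothetical, and labeling alone does not defeat injection; it has to be combined with the action allowlist and output monitoring.

```python
ALLOWED_TOOLS = {"search_kb", "draft_reply"}       # read-only / low-risk actions
REVIEW_REQUIRED = {"send_email", "update_config"}  # side effects need human sign-off

def dispatch_tool_call(tool: str, args: dict, human_approved: bool = False):
    """Constrain what an LLM agent can do: unknown tools are refused outright,
    and side-effecting tools run only with explicit human approval."""
    if tool in ALLOWED_TOOLS:
        return ("run", tool, args)
    if tool in REVIEW_REQUIRED and human_approved:
        return ("run", tool, args)
    return ("refused", tool, args)

def wrap_untrusted(content: str) -> str:
    """Mark retrieved content as data, not instructions, before it reaches
    the model. A mitigation layer, not a complete defence."""
    return f"<untrusted_data>\n{content}\n</untrusted_data>"
```

The key design choice is that the allowlist lives outside the model: no matter what an injected instruction says, the dispatcher, not the LLM, decides what actually executes.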
4.3 Deepfakes, Fraud, and Identity
Generative AI has dramatically lowered the cost and skill required to produce convincing deepfake audio and video. Real‑world fraud cases have used AI‑generated voices and video of executives to trick staff into authorising large transfers. In one incident, attackers used a deepfake video call to impersonate multiple senior executives; the finance director, believing the call genuine, approved a large payment that was later found to be fraudulent. Another early case involved a cloned CEO voice used to request a wire transfer, which was duly executed. Cybersecurity agencies have started warning about “CEO fraud 2.0,” where deepfakes augment or replace traditional business‑email compromise. Enterprises that use AI to generate synthetic identities or internal communications without clear markings risk increasing confusion and lowering staff’s ability to detect fraud.
The safer path is to strengthen multi‑factor authentication and train staff to treat unexpected high‑pressure requests as suspicious, regardless of apparent realism.
5. A Practical “Do Not Use” Heuristic
Across these domains, several recurring patterns show where business technologists should either not use AI at all or confine it to carefully bounded, human‑centred roles.
You should not rely on AI as a primary decision‑maker or autonomous actor when:
- The decision is legally or ethically significant for individuals (employment, credit, healthcare, education, essential services, law enforcement) and you cannot provide meaningful explanation, contestability, and human oversight.
- The use falls into or near banned categories like social scoring, exploitative manipulation, emotion recognition in workplaces, or broad biometric surveillance.
- The environment is safety‑critical or infrastructure‑critical and you lack robust, formally validated controls and clear segregation between AI recommendations and control actions.
- The organisation has no coherent AI governance framework or risk‑management process aligned with emerging standards and principles.
- The workflows expose sensitive or regulated data to external models without contractual safeguards, technical controls, and user training on what must never be shared.
- The outputs touch IP‑sensitive assets or legal/regulatory communications where copyright or accuracy errors can cause disproportionate harm.
- The organisational culture encourages uncritical trust in algorithms and lacks mechanisms for humans to override or escalate concerns about AI behaviour.
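The checklist can be turned into a crude triage aid: any single signal argues for confining AI to an advisory role, and several together argue against using it at all. The sketch below encodes that heuristic; the thresholds reflect this article's rule of thumb, not a formal standard.

```python
# Condensed versions of the "do not use" signals listed above.
NO_GO_SIGNALS = [
    "legally significant decision without human oversight",
    "banned or near-banned use (social scoring, manipulation, biometrics)",
    "safety-critical control without validated fail-safes",
    "no AI governance or risk-management framework",
    "sensitive data exposed to external models without safeguards",
    "IP or legal/regulatory outputs without accuracy review",
    "culture of uncritical trust, no override mechanism",
]

def assess_use_case(signals_present: set[int]) -> str:
    """Triage over the checklist: indices into NO_GO_SIGNALS that apply.
    One signal -> advisory only; several -> do not use AI here yet."""
    if not signals_present:
        return "proceed with standard governance"
    if len(signals_present) == 1:
        return "advisory role only, with human decision-makers in control"
    return "do not use AI here until the underlying gaps are closed"
```

A checklist like this is obviously no substitute for a formal risk assessment, but it gives architecture reviews a shared vocabulary for saying "no" early.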
As AI security and governance guidance from national cybersecurity centres emphasises, secure and responsible AI is not merely a technical property. It is a system of people, processes, and controls. In many enterprises, that system is still in its infancy. Recognising where not to use AI is therefore a strategic capability, not a sign of technological backwardness. The most resilient enterprises will combine targeted, well‑governed AI adoption with explicit “no‑use” zones based on law, ethics, and risk appetite. Business technologists have a central role in drawing those boundaries – before regulators, courts, or incidents draw them on their behalf.
References:
- AI Act Service Desk – Article 5: Prohibited AI practices. European Commission. https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-5
- SupplierShield – “What is the EU AI Act? Complete Guide 2025.” https://www.suppliershield.com/post/what-is-the-eu-ai-act-complete-guide-2025
- GDPR Article 22 – Automated individual decision‑making, including profiling. https://gdpr-text.com/read/article-22/
- GDPR.eu – “Art. 22 GDPR – Automated individual decision‑making, including profiling.” https://gdpr.eu/article-22-automated-individual-decision-making/
- GDPR Article 22 Explained – “The right to human decision‑making.” https://gdprinfo.eu/gdpr-article-22-explained-automated-decision-making-profiling-and-your-rights
- NIST AI Risk Management Framework – overview (Palo Alto Networks). https://www.paloaltonetworks.com/cyberpedia/nist-ai-risk-management-framework
- ISO/IEC 42001:2023 – AI management system. https://www.iso.org/standard/42001
- ISO/IEC 23894:2023 – AI risk management standard. https://www.szstr.com/aiq-en/iso23894
- IBM – “What is AI Governance?” and related governance insights. https://www.ibm.com/think/topics/ai-governance
- IBM – “Building a robust framework for data and AI governance and security.” https://www.ibm.com/think/insights/foundation-scalable-enterprise-ai
- Papagiannidis et al. – “Responsible artificial intelligence governance: A review.” https://www.sciencedirect.com/science/article/pii/S0963868724000672
- SecurePrivacy – “AI Governance: Enterprise Compliance & Risk.” https://secureprivacy.ai/blog/ai-governance
- eSystems Nordic – “AI Ethics and Governance: Responsible Use.” https://www.esystems.fi/en/blog/ai-ethics-and-governance-responsible-use
- Real World Data Science – “Understanding and Addressing Algorithmic Bias: a Credit Scoring Case Study.” https://realworlddatascience.net/applied-insights/case-studies/posts/2026/02/11/algorithmic_bias_credit_scoring.html
- EEOC – “The EEOC Issues New Guidance on Use of Artificial Intelligence in Hiring.” https://www.brickergraydon.com/insights/publications/The-EEOC-Issues-New-Guidance-on-Use-of-Artificial-Intelligence-in-Hiring
- Eurofound – “Employee monitoring: A moving target for regulation.” https://www.eurofound.europa.eu/en/publications/all/employee-monitoring-moving-target-regulation
- AI21 – “What are AI Hallucinations? Signs, Risks, & Prevention.” https://www.ai21.com/knowledge/ai-hallucinations/
- Fortune – “UK health service AI tool generated a set of false diagnoses for a patient.” https://fortune.com/2025/07/20/uk-health-service-ai-tool-false-diagnoses-patient-screening-nhs-anima-health-annie/
- European Medicines Agency – “Guiding principles on the use of large language models (LLMs).” https://www.biosliceblog.com/2024/09/ai-ema-publishes-guiding-principles-on-the-use-of-large-language-models-llms/
- BearingPoint – “The AI Act requires human oversight.” https://www.bearingpoint.com/en-us/insights-events/insights/the-ai-act-requires-human-oversight/
- EthicAI – “Tracking AI incidents: OECD AIM and AIAAIC Repository.” https://ethicai.net/ai-incidents
- Cranium – “AI Security in 2026: Enterprise Governance, Risks & Best Practices.” https://cranium.ai/resources/blog/ai-safety-and-security-in-2026-the-urgent-need-for-enterprise-cybersecurity-governance/
- OWASP GenAI – “LLM02:2025 Sensitive Information Disclosure.” https://genai.owasp.org/llmrisk/llm02-insecure-output-handling/
- OWASP – “LLM Prompt Injection Prevention Cheat Sheet.” https://cheatsheetseries.owasp.org/cheatsheets/LLM_Prompt_Injection_Prevention_Cheat_Sheet.html
- UK NCSC (and partners) – “Guidelines for secure AI system development.” https://www.ncsc.gov.uk/files/Guidelines-for-secure-AI-system-development.pdf
- New Zealand NCSC – “Guidelines for secure AI system development.” https://www.ncsc.govt.nz/protect-your-organisation/guidelines-for-secure-ai-system-development/
- Ireland NCSC – “Cybersecurity guidance on Generative AI for Public Sector Bodies.” https://www.ncsc.gov.ie/pdfs/Cybersecurity_Guidance_on_Generative_AI_for_PSBs.pdf
- LinkedIn – “Shadow AI Explained: How Your Employees Are Already Using AI in Secret.” https://www.linkedin.com/pulse/shadow-ai-explained-how-your-employees-already-using-secret-hamdan-vcn7f
- Stafford Rosenbaum – “The High Risk of Intellectual Property Infringement with Use of Generative AI.” https://www.staffordlaw.com/blog/business-law/generative-artificial-intelligence-101-risk-of-intellectual-property-infringement/
- UK Civil Service – “Using Large Language Models responsibly in the civil service.” https://www.bennettschool.cam.ac.uk/publications/using-llms-responsibly-in-the-civil-service/
- UK Government CDDO – “The use of generative AI in government.” https://cddo.blog.gov.uk/2023/06/30/the-use-of-generative-ai-in-government/
- FTC – “FTC Warns Companies about Generative AI.” https://wp.nyu.edu/compliance_enforcement/2023/05/22/ftc-warns-companies-about-generative-ai/
- NERC / Ampyx Cyber – “Embracing AI for the Electric Grid: Insights from NERC.” https://ampyxcyber.com/blog/embracing-ai-for-the-electric-grid-insights-from-nerc
- EBA – “Outcome of EBA’s AI Act mapping exercise.” https://www.regulationtomorrow.com/the-netherlands/fintech-the-netherlands/eba-letter-outcome-of-ebas-ai-act-mapping-exercise/
- AI in the Boardroom – “Breakdown of the OECD’s Principles for Trustworthy AI.” https://www.aiintheboardroom.com/p/breakdown-of-the-oecds-principles
- Local Government Association – “Large language models and generative AI – policy brief.” https://www.local.gov.uk/our-support/cyber-digital-and-technology/cyber-digital-and-technology-policy-team/large-language
- Swiss NCSC – “Online meeting with deepfake boss: CEO fraud 2.0.” https://www.ncsc.admin.ch/ncsc/en/home/aktuell/im-fokus/2024/wochenrueckblick_14.html
- Brside – “Deepfake CEO Fraud: $50M Voice Cloning Threat to CFOs.” https://www.brside.com/blog/deepfake-ceo-fraud-50m-voice-cloning-threat-cfos
- National Cyber Security Centre (UK) – “ChatGPT and LLM cyber risks” (news analysis). https://www.dataguidance.com/news/uk-ncsc-addresses-chatgpt-and-llm-cyber-risks