The 6 Dimensions of AI Readiness
The AIQ assesses organisational AI readiness across six strategic dimensions, from corporate strategy to governance. Each dimension produces its own score, showing where your organisation stands today and which area offers the biggest lever for your next step.
This page explains what each dimension measures — so you can interpret your report results and take targeted action.
Strategy
The Strategy dimension examines whether AI is strategically embedded in your organisation — not just as an experiment by individual teams, but as an explicit business initiative with resources, responsibilities, and measurable goals.
It measures whether leadership actively drives AI, whether a documented AI vision exists, and whether that vision is linked to business objectives. Organisations with a high Strategy score don't just have AI on the agenda — they know specifically which processes they want to transform, by when, and who is responsible.
Without strategic anchoring, AI initiatives often fizzle out: they start as pilots, deliver isolated results, but never scale. A low score here typically means lots of interest but little commitment at leadership level.
Low Score — typical signs
- AI is not a topic at board or executive level
- No documented AI strategy or roadmap
- AI projects run ad hoc, without a clear sponsor
- No dedicated budget or resources for AI
High Score — what it looks like
- AI vision is embedded in the mission statement or corporate strategy
- Clear AI goals with timelines and owners
- A senior leader is named as an active AI champion
- AI budget is a dedicated line item in planning
Data
AI is only as good as the data it works with. The Data dimension assesses whether your organisation has the data foundation needed for meaningful AI applications — in terms of availability, quality, accessibility, and governance.
Many organisations already collect substantial volumes of data but cannot make it usable for AI: because it sits in silos, is inconsistently structured, lacks clear ownership, or faces data protection constraints. The score measures data maturity, not data volume.
Without sufficient data quality and accessibility, AI projects don't fail because of the model — they fail at the foundation. This dimension is often the most important early indicator of whether an organisation can deploy AI productively within six months or needs to invest in data infrastructure first.
Low Score — typical signs
- Data lives in spreadsheets, local drives, or isolated systems
- No unified definitions for core data (e.g. "customer", "revenue")
- Nobody is explicitly responsible for data quality
- Unclear what data exists and where it lives
High Score — what it looks like
- Central, accessible data sources (e.g. data warehouse, CRM, ERP)
- Documented data pipelines with defined quality criteria
- Clear data ownership and governance processes
- Privacy and compliance requirements are integrated into processes
Tech Stack
The Tech Stack dimension evaluates your organisation's technical infrastructure: what systems are in place, how well they are integrated, and to what extent they support AI solutions.
This is not about whether you already use AI-specific tools. It is about whether your existing system landscape — from ERP and CRM to communication and collaboration tools — is modern enough and sufficiently connected to integrate AI applications meaningfully. Legacy systems without APIs, fragmented tool landscapes, or a lack of cloud infrastructure significantly increase integration effort.
A high score does not mean everything needs to be new. It means the infrastructure is open, integrable, and extensible — a necessary prerequisite for AI solutions to become part of workflows rather than ending up as isolated tools.
Low Score — typical signs
- Core systems with no or very limited APIs
- No cloud infrastructure; everything on-premise with no migration path
- Systems barely communicate; manual data transfers dominate
- IT budgets focused on maintenance, little room for modernisation
High Score — what it looks like
- Modern SaaS systems with open APIs and webhook support
- Cloud-first or hybrid infrastructure in operation
- Systems are integrated; automation platforms (e.g. Make, n8n) in use
- IT stack is actively developed and documented
Processes
The Processes dimension analyses how clear, repeatable, and documented your organisation's workflows are. AI can only automate where a process is sufficiently standardised — undefined or highly variable workflows are difficult to support meaningfully with AI.
High process maturity does not mean rigid bureaucracy. It means your people know how a process works, that it is measurable, and that exceptions are documented — not improvised. Organisations with mature processes can deploy AI in a targeted way: as decision support, as an automation layer, or for quality assurance.
A low score in this dimension is not a failure — it is an indicator of where optimisation work is needed before AI adoption. A poorly defined process that gets automated remains a poor process.
Low Score — typical signs
- Workflows depend on individual employees' personal knowledge
- Little or no process documentation exists
- Similar tasks are handled differently from person to person
- No defined KPIs or quality metrics for core processes
High Score — what it looks like
- Core processes are documented, consistent, and measurable
- Exceptions and escalation paths are defined
- Initial automation experience exists (e.g. workflows, RPA)
- Process owners know bottlenecks and optimisation potential
People & Skills
Technology alone is not enough. The People & Skills dimension assesses whether your team has the capabilities, knowledge, and willingness to use and develop AI systems productively — across all levels of the organisation.
It starts with basic AI literacy: do employees understand what AI can and cannot do? Do leaders know how to evaluate and steer AI projects? Does the organisation have the technical competence to implement AI solutions or at least procure them effectively? And perhaps most importantly: how high is the readiness for change?
Resistance to AI often stems not from ignorance but from legitimate uncertainty about consequences. Organisations with a high score here have built trust: through transparency, involvement, and genuine skills development rather than mandatory training.
Low Score — typical signs
- AI is primarily perceived as a threat by the team
- No systematic training offerings on AI topics
- Technical AI knowledge concentrated in just a few individuals
- Change management for new tools is not actively managed
High Score — what it looks like
- Active internal AI upskilling programme running or planned
- Team shows curiosity and initiative with new AI tools
- Change management is part of every new technology rollout
- AI skills distributed across multiple teams and levels
Governance
The Governance dimension assesses whether your organisation has established a structured approach to the risks, obligations, and ethical questions surrounding AI. It is the newest of the six dimensions, and the one gaining the most regulatory weight through the EU AI Act.
Governance spans multiple layers: what policies exist for AI use in the organisation? How is it ensured that AI decisions are traceable and explainable? Who is responsible for AI risks? Are suppliers and external AI services selected according to ethical and data protection criteria?
A low Governance score does not automatically mean mistakes are happening, but it does mean the organisation is poorly prepared for incidents, regulatory enquiries, or compliance audits. Under the EU AI Act, organisations become explicitly accountable for a growing number of use cases from 2025/26 onwards.
Low Score — typical signs
- No internal policy on the use of AI tools (e.g. ChatGPT, Copilot)
- Unclear who in the organisation is responsible for AI risks
- EU AI Act and GDPR implications for AI not actively addressed
- No documentation of which AI systems are in use
High Score — what it looks like
- AI usage policies for employees are documented and communicated
- Clear responsibility for AI compliance and risk monitoring
- AI systems are reviewed for bias, explainability, and data protection
- EU AI Act requirements are known and actively tracked
All 6 Dimensions at a Glance
How does your organisation score?
Get your free AIQ score. In 5 minutes you'll know where you stand across all 6 dimensions.
Take the free test