R&D Tax Credits 101: What actually counts as R&D for SaaS companies


If it felt like a straight implementation, it probably isn’t R&D – if the team had to invent its way out, it might be.

Overview

  • Qualifying R&D requires a real technical uncertainty, a measurable technological advance, and a systematic investigation – prove it with evidence over opinion.
  • Include work directly tied to resolving uncertainty (design, prototyping, testing, targeted DevOps); exclude routine build, cosmetic changes, and like‑for‑like implementations.
  • Map costs cleanly (staff, subcontractors/EPWs, cloud/software/data used for experiments), and maintain a consistent audit trail across engineering and finance.

What HMRC means in practice (translated for SaaS)

R&D for tax purposes is about solving a technical problem where a competent professional could not readily work out the solution at the outset. It’s not about buzzwords; it’s about demonstrating that the team tackled a non‑obvious challenge and advanced capability in the underlying technology – not just adding a product feature.

Three anchors to align on from the start:

  • Technical uncertainty: The solution was not obvious at the outset to someone skilled in the field. Normal engineering effort is not enough; there must be genuine uncertainty in achieving the result or how to achieve it.
  • Technological advance: The work moved capability forward – performance, scalability, accuracy, reliability, interoperability, or security – within your stack or domain. It’s not a UI facelift or a well‑documented integration.
  • Systematic investigation: Deliberate, recorded exploration of options. Design alternatives, prototypes, experiments, failure analysis, benchmarks, and decisions – captured as part of the work, not re‑written after the fact.

Evidence over opinion. The best claims read like good engineering: clear problem, constraints, options, trials, failures, and a defensible solution.


SaaS examples: Qualifies vs. doesn’t

Qualifies (when tied to uncertainty and a systematic approach):

  • Stabilising tail latency at P99.9 under spiky, unpredictable loads by designing a novel distributed cache strategy, evaluating and instrumenting multiple architectures before converging.
  • Enabling accurate semantic search at scale by evaluating vector databases and index structures, quantifying retrieval performance and latency trade‑offs across large, shifting corpora.
  • Achieving multi‑tenant isolation with strict performance and compliance constraints by developing a new routing/queuing or data partitioning approach where established patterns fail to meet targets.
  • Zero‑downtime migration of a stateful service with unknown consistency or ordering impacts, requiring bespoke coordination, roll‑back semantics, and verification mechanisms.

Doesn’t (typically out of scope):

  • Reskinning or polishing UI, accessibility tweaks without deeper technical uncertainty, content or copy changes.
  • Swapping in a well‑documented library or cloud service “as‑is” without modification or design uncertainty.
  • Routine pipeline tuning using standard, established patterns with predictable outcomes.
  • Pure production rollout, deployment, or re‑platforming where the technical pathway is settled and documented.

When in doubt, ask: What exactly was uncertain? What alternatives were considered? What failed? What measure of capability moved?


Activities and scope: What to include, what to exclude

Include (when directly tied to resolving uncertainty):

  • Research and design to tackle the uncertainty: problem framing, constraints, hypotheses, architecture options, and decision records.
  • Prototyping and experimentation: proofs of concept, spike solutions, test harnesses, and benchmark rigs.
  • Testing and measurement: functional and non‑functional tests designed to validate or invalidate technical hypotheses (performance, accuracy, reliability, security).
  • Targeted DevOps/infrastructure work needed to enable or validate experiments (e.g., bespoke observability or load generation for tail‑latency diagnostics – a minimal sketch follows this list).
  • Iterations that document dead‑ends, regressions, and course corrections.
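
To make “benchmark rigs” concrete, here is a minimal, purely illustrative Python sketch of the kind of load‑generation harness that might support tail‑latency diagnostics. The call_service function is a stand‑in for whatever is actually under test and the numbers are invented; the point is that the rig, its seed, and its output can all be linked from the R&D epic as evidence.

# Minimal tail-latency benchmark rig (illustrative sketch; call_service is a stand-in).
import json
import random
import statistics
import time

def call_service() -> float:
    """Placeholder for a real request; returns observed latency in milliseconds."""
    start = time.perf_counter()
    time.sleep(random.expovariate(1 / 0.005))  # simulated ~5 ms mean response time
    return (time.perf_counter() - start) * 1000

def run_benchmark(requests: int = 1000, seed: int = 42) -> dict:
    random.seed(seed)  # fixed seed so the run can be reproduced later
    latencies = sorted(call_service() for _ in range(requests))
    def pct(p: float) -> float:  # simple nearest-rank percentile
        return latencies[min(len(latencies) - 1, int(p / 100 * len(latencies)))]
    return {
        "requests": requests,
        "seed": seed,
        "mean_ms": round(statistics.mean(latencies), 2),
        "p50_ms": round(pct(50), 2),
        "p99_ms": round(pct(99), 2),
        "p99_9_ms": round(pct(99.9), 2),
    }

if __name__ == "__main__":
    # Print (or persist) the results so the run can be referenced from design notes.
    print(json.dumps(run_benchmark(), indent=2))

Keeping this kind of rig, its configuration, and the resulting percentiles under version control alongside the design notes is usually far stronger evidence than a written summary produced after the fact.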

Exclude:

  • Routine feature development or refactoring not aimed at resolving uncertainty.
  • Productionisation and rollout activities unrelated to the uncertainty (monitoring setup, standard CI/CD work, typical SRE tasks).
  • Content, UI/UX polish, and routine QA that doesn’t test a technical hypothesis.
  • Business research, market analysis, and commercial strategy.

Subcontractors and externally provided workers (EPWs):

  • Ensure scope clarity: statements of work should specify the uncertain technical problem and deliverables tied to the investigation.
  • Maintain time attribution and artefacts: link external work to your epics, with access to design notes, test results, and code diffs where feasible.
  • Avoid double claims: keep clean boundaries when multiple parties collaborate.

Cost mapping for SaaS R&D

Aim for clean, defensible classification:

  • Staff costs: Salaries, employer NICs/pension where time is reasonably allocated to qualifying R&D epics. Use role clarity and documented allocations; avoid blanket percentages without support.
  • Subcontractors/EPWs: Eligible when engaged on qualifying R&D tasks; retain contracts, invoices, and time/evidence mapping.
  • Software/cloud/data: The portion that directly supports experiments and test environments (e.g., ephemeral clusters, experimentation datasets, benchmarking tools). Separate production costs from R&D environments – see the sketch after this list.
  • Consumables: Data or compute consumed in experiments, where relevant; document linkage to tests and iterations.
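
One way to keep that separation auditable is to tag experimental resources and total them from the billing export. The sketch below is illustrative only: the CSV column names (month, resource_tag, cost_gbp) and the rnd-experiment tag are assumptions about your own conventions, not any provider’s actual schema.

# Illustrative sketch: totalling experiment-tagged spend from a billing export.
# Column names and the tag value are assumed conventions, not a provider schema.
import csv
from collections import defaultdict

RND_TAG = "rnd-experiment"  # hypothetical tag applied to R&D/experiment resources

def rnd_spend_by_month(billing_csv: str) -> dict:
    totals = defaultdict(float)
    with open(billing_csv, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("resource_tag") == RND_TAG:
                totals[row["month"]] += float(row["cost_gbp"])
    return dict(totals)

# Example usage:
# print(rnd_spend_by_month("billing_export.csv"))

Even a rough report like this, run monthly, keeps the engineering and finance views of R&D spend aligned.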

Common misclassifications to avoid:

  • Treating production infrastructure as R&D by default.
  • Including general productivity tools without direct R&D linkage.
  • Double counting external costs across multiple entities or categories.

Evidence that wins: A lightweight workflow

Bake evidence into normal delivery to avoid retrofits:

  • Tag R&D epics clearly in the backlog and open each with a short “uncertainty statement.”
  • Capture options and dead‑ends: add a simple “Alternatives considered” section to design notes and link benchmark snapshots.
  • Instrument experiments: store test configs, datasets, seed values, and scripts used to generate results; keep before/after graphs for quick reference (a short sketch follows this list).
  • Write short sprint notes: what was tried, what failed, what improved, what’s next.
  • Keep decision snapshots: a 3–5 bullet “why this approach” with links to artefacts.
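
A lightweight way to bake this in is to write one small JSON record per experiment run, holding the configuration, seed, code revision, and resulting metrics, so the epic can link straight to it. The sketch below assumes a git repository and a local evidence/ folder; the field names are illustrative rather than a prescribed format.

# Illustrative experiment log: one JSON record per run, linkable from the R&D epic.
import json
import subprocess
import time
from pathlib import Path

def log_experiment(epic: str, config: dict, metrics: dict, folder: str = "evidence") -> Path:
    record = {
        "epic": epic,                                   # e.g. the backlog ticket ID
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "git_commit": subprocess.run(                   # ties the run to the exact code
            ["git", "rev-parse", "HEAD"], capture_output=True, text=True
        ).stdout.strip(),
        "config": config,                               # parameters, seeds, dataset IDs
        "metrics": metrics,                             # the before/after numbers
    }
    out_dir = Path(folder)
    out_dir.mkdir(exist_ok=True)
    path = out_dir / f"{epic}-{int(time.time())}.json"
    path.write_text(json.dumps(record, indent=2))
    return path

# Example usage:
# log_experiment("RND-142", {"cache": "consistent-hash", "seed": 7}, {"p99_9_ms": 180.4})

If the record also links the script and dataset versions, the audit trail largely writes itself.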

If an outsider can follow the breadcrumbs and see the problem, the exploration, and the result, the narrative is strong.


Red flags (and how to fix them)

  • Weak technical narrative: Replace assertions with artefacts – tickets, commits, benchmarks, test logs.
  • Over‑claiming maintenance or polish: Narrow scope to uncertainty‑linked work; split epics if needed.
  • Subcontractor duplication: Coordinate with partners to avoid overlapping claims; define who owns what.
  • Productionisation creep: Separate experiments from rollout; include only the production work essential to resolve uncertainty (e.g., live validation where simulation isn’t feasible).
  • Cloud cost bloat: Tag R&D environments; annotate experiment runs; exclude steady‑state production spend.

Fix‑before‑file: If any section relies on “we believe,” find the corresponding artefact or tighten scope.


How to identify qualifying work in the backlog

  • Scan roadmap and sprints for problems phrased as “not sure how to…” or “not sure if we can…” – these are candidates (a simple scan sketch follows this list).
  • Look for performance targets with unknown paths (P99.9 latency, cold‑start time, cost/throughput at scale).
  • Flag integrations where public guidance is incomplete or unsuitable for constraints (security, compliance, data locality).
  • Highlight ML/AI initiatives where model performance, drift handling, or evaluation methods required novel approaches.
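
As a first pass, that scan can be as simple as searching an exported backlog for uncertainty language and flagging tickets for human review. The sketch below assumes a CSV export with hypothetical key and description columns; it only surfaces candidates – it does not decide eligibility.

# Illustrative first-pass scan of a backlog export for uncertainty language.
# The column names and phrase list are assumptions to adapt to your own tooling.
import csv

UNCERTAINTY_PHRASES = (
    "not sure how to",
    "not sure if we can",
    "unknown whether",
    "no established approach",
)

def candidate_tickets(backlog_csv: str) -> list:
    candidates = []
    with open(backlog_csv, newline="") as f:
        for row in csv.DictReader(f):
            text = row.get("description", "").lower()
            if any(phrase in text for phrase in UNCERTAINTY_PHRASES):
                candidates.append(row["key"])
    return candidates

# Example usage:
# print(candidate_tickets("backlog_export.csv"))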

For each candidate, create an epic with the following (a filled‑in example appears after the list):

  • Uncertainty statement: What’s unknown, why it’s hard, and what “advance” will look like.
  • Success criteria: Measurable indicators (e.g., latency distributions, accuracy metrics, error budgets).
  • Experiment plan: A few initial options to test and how results will be recorded.
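
To show what a filled‑in epic might look like, here is a purely hypothetical example based on the tail‑latency scenario above; every value is invented, and the same structure works just as well in a ticket template as in code.

# Hypothetical R&D epic record – all names and numbers are invented for illustration.
rnd_epic = {
    "id": "RND-142",
    "uncertainty_statement": (
        "We do not know whether any caching strategy can hold P99.9 latency "
        "under 200 ms during spiky multi-tenant load; published patterns assume "
        "steadier traffic, so the advance would be a design that can."
    ),
    "success_criteria": {
        "p99_9_latency_ms": "under 200 across the recorded spike profile",
        "error_budget": "no regression against the current monthly budget",
    },
    "experiment_plan": [
        "Baseline the current architecture with the benchmark rig",
        "Prototype a consistent-hashing cache and re-run the rig",
        "Prototype request coalescing and compare results",
    ],
}

However it is stored, the point is that the uncertainty, the target, and the planned experiments are written down before the work starts.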

Common Questions from Our Clients

  • What is “technical uncertainty” in SaaS?
    A non‑obvious engineering challenge where a competent professional could not readily work out the solution at the outset – often around performance, scale, reliability, security, or novel integrations.
  • Do cloud costs qualify?
    The portion directly supporting R&D experiments and test environments can qualify when evidenced; steady‑state production infrastructure usually does not.
  • Can testing time count?
    Yes, when tests are designed to validate technical hypotheses related to the uncertainty (e.g., performance, accuracy), not routine QA for stable builds.
  • What about ML projects?
    Qualifying work focuses on non‑obvious technical challenges – e.g., architecture choices, training/evaluation methods for hard constraints, data strategies – documented with experiments and results.

Contact Consult EFC for a FREE consultation to discuss your R&D and how you can claim: email info@consultefc.com or fill in the form here.

