
UAT on S/4HANA Projects: Why It Breaks Down and What To Do About It

Sundar Padmanabhan

3/5/2025 · 6 min read


If the last article was about getting your planning right before testing starts, this one is about the phase where even well-planned programmes can come unstuck — User Acceptance Testing.

UAT has a reputation problem. Ask most project managers what UAT is for and they'll tell you it's where the business confirms the system works before go-live. That's technically correct, but it misses something important. UAT isn't just a quality gate. It's often the first time real business users have spent serious, unscripted time in the system. It's where the gap between what was built and what was expected becomes visible — sometimes for the first time. And it's the phase that carries the most political weight, because the people doing the testing are the same people who'll be using the system on day one and telling their colleagues whether this whole transformation was worth it.

I've seen UAT done well. It's not common. More often I see some version of the same story — the programme arrives at UAT already behind schedule, the business testers are reluctant or unavailable, the defect log fills up with issues that should have been caught in system integration testing (SIT), and the test manager is stuck between a systems integrator (SI) defending their delivery and a business that's losing confidence. The go-live date is circled on a calendar somewhere, and everyone is watching it nervously.

None of that is inevitable. But avoiding it requires understanding why UAT fails in the first place.

Business users aren't testers. Stop expecting them to behave like testers.

This is the thing that bites most programmes, and the thing almost nobody says out loud early enough.

Your UAT participants are finance managers, procurement officers, warehouse supervisors, and HR business partners. They are subject matter experts in their processes, not in software testing methodology. They don't think in test cases. They don't naturally document what they did, what they expected, and what actually happened. When something looks wrong, their instinct is to pick up the phone and ask someone, not raise a defect ticket in your test management tool.

And honestly? That's fine. That's not a failing on their part — it's just who they are. The mistake is designing a UAT approach that assumes they'll behave like trained QA analysts, and then being frustrated when they don't.

What works is designing UAT around how business users actually think and work. That means giving them end-to-end business scenarios rather than system-level test cases. Instead of "execute transaction ME21N with purchase order type NB," you give them "you've just received an approved purchase requisition from the maintenance team — complete the process through to goods receipt." The scenario maps to something they do in real life. They can engage with it. They'll spot genuine issues because they're working in a context they understand.
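To make the contrast concrete, here's a rough sketch of that scenario framing expressed as a simple Python structure. The field names and scenario details are purely illustrative; they aren't drawn from any particular test management tool.

    from dataclasses import dataclass, field

    @dataclass
    class UatScenario:
        """An end-to-end business scenario, written in the tester's own
        business language rather than in transaction codes."""
        scenario_id: str
        business_context: str   # the real-world trigger the tester recognises
        expected_outcome: str   # a business result, not a screen state
        steps: list[str] = field(default_factory=list)

    # Transaction-level framing (what not to hand a business user):
    #   "Execute transaction ME21N with purchase order type NB."

    # Scenario framing the same user can actually engage with:
    procure_to_receipt = UatScenario(
        scenario_id="P2P-014",
        business_context="An approved purchase requisition has arrived "
                         "from the maintenance team.",
        expected_outcome="Goods are received against the purchase order "
                         "and the stock is visible where they expect it.",
        steps=[
            "Convert the requisition into a purchase order",
            "Send the order to the vendor",
            "Post the goods receipt when the delivery arrives",
        ],
    )

The tooling doesn't matter; what matters is that the scenario carries enough business context for the tester to judge whether the outcome is actually right.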

It also means investing in how you onboard UAT participants before the phase starts. A two-hour system walk-through at the beginning of UAT week is not enough. People need to feel sufficiently comfortable in the Fiori environment — which, if they're coming from SAP GUI, feels genuinely foreign at first — before they can focus on validating process outcomes rather than fighting the interface.

The defect conversation nobody wants to have

Here's something that gets awkward in almost every UAT I've been involved in. Some of the defects raised aren't defects. They're change requests disguised as defects. Sometimes they're gaps in the original requirements. Sometimes they're new requirements that emerged during UAT because seeing the system working prompted the business to think more carefully about what they actually need. And sometimes — more often than people admit — they're valid defects that should have been caught in SIT but weren't.

The problem is that all of these things arrive in the defect log looking the same, and they need to be triaged very differently.

A genuine P1 defect — a system behaviour that will prevent a critical business process from operating at go-live — is non-negotiable. It gets fixed before you go live, full stop. But a change request that's been relabelled as a defect to bypass the change control process is a very different thing, and allowing it to be treated as a defect has real consequences: it inflates your defect count, distorts your RAG (red/amber/green) status, puts pressure on the SI to deliver scope they were never contracted for, and burns testing time on changes that belong in a post-go-live roadmap.

You need a clear, agreed defect classification framework before UAT starts — not just severity levels, but also a category that distinguishes genuine defects from new requirements, from configuration questions, from training issues, and from user error. And you need a triage process with the authority to make those calls quickly. Defects that sit unclassified for three days in a UAT phase kill momentum. People stop raising them because they feel like they disappear into a void.
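As a sketch of what such a framework might look like, the snippet below keeps severity and classification as separate axes and routes each triaged item to the queue where it belongs. The category names and routing targets are assumptions for illustration, not a standard taxonomy.

    from enum import Enum

    class Severity(Enum):
        P1 = "Prevents a critical business process at go-live"
        P2 = "Significant impact, but a workaround exists"
        P3 = "Minor or cosmetic"

    class Classification(Enum):
        DEFECT = "System does not behave as specified"
        NEW_REQUIREMENT = "Works as specified; the spec was incomplete"
        CONFIG_QUESTION = "Needs a functional decision, not a code fix"
        TRAINING_ISSUE = "System is correct; the user needs guidance"
        USER_ERROR = "Wrong data or steps; no system action required"

    def route(classification: Classification) -> str:
        """Keep only genuine defects on the defect log, so counts and
        RAG status reflect reality rather than relabelled scope."""
        return {
            Classification.DEFECT: "defect backlog (P1s block go-live)",
            Classification.NEW_REQUIREMENT: "change control / post-go-live roadmap",
            Classification.CONFIG_QUESTION: "functional design authority",
            Classification.TRAINING_ISSUE: "training and change team",
            Classification.USER_ERROR: "close, with feedback to the tester",
        }[classification]

Whatever labels you choose, the point is that both axes get assigned at triage, quickly, so nothing sits in the log pretending to be something it isn't.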

The triage meeting is unglamorous work. Daily, usually an hour, often involving people who'd rather be elsewhere. But it's where the health of your UAT is really determined — not in the defect count on the dashboard, but in the quality of the decisions being made about each one.

When the business says "not ready" and the programme says "we're going"

This is the moment that defines a test manager's credibility, and it comes on almost every programme.

The go-live date is approaching. The programme team is declaring the system ready based on metrics — outstanding defect counts are within tolerance, test execution is at ninety-something percent, the SI has closed their punch list. But the business — or at least some part of it — is not comfortable. The finance team has a P2 defect around the intercompany reconciliation process they don't trust. The warehouse team feels like they haven't had enough time in the system. A regional manager is quietly telling their people to keep running the legacy system in parallel just in case.

Who's right?

The honest answer is that both perspectives contain something real. Programmes have to make go/no-go decisions under uncertainty — perfect readiness is never achievable, and at some point the cost of delay outweighs the risk of going live with known issues. But business unease that's been dismissed rather than resolved has a way of becoming a self-fulfilling prophecy. People who don't trust the system will avoid it, work around it, or use it incorrectly — and that creates exactly the kind of post-go-live problems the programme was hoping to avoid.

The test manager's role in this moment is not to advocate for go-live or against it. It's to ensure the decision is made with clear, honest information. What are the open defects, what is their actual business impact, and what is the workaround if they're not fixed before go-live? What specifically are the business users not comfortable with — is it a system issue, a training issue, or a change management issue? Those are different problems with different solutions. The go/no-go decision belongs to the programme sponsor and the business. Your job is to make sure that decision is based on fact rather than optimism or political pressure.

Go-live readiness should never be a surprise announcement. It should be the conclusion of a structured readiness assessment that's been tracked openly for weeks — exit criteria reviewed, business sign-off documented, risk acceptance formally recorded. When it's done that way, even difficult go-live decisions are defensible. When it's done as a last-minute call in a room full of people who've been working eighteen-hour days for three months, you're rolling the dice.
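One way to keep that assessment open and auditable is to track each exit criterion as an explicit record with its evidence and any formally accepted risk. The sketch below uses invented field names; the structure, not the schema, is the point.

    from dataclasses import dataclass

    @dataclass
    class ExitCriterion:
        description: str
        met: bool
        evidence: str            # where the proof lives: a sign-off register, a defect-log extract
        accepted_risk: str = ""  # recorded formally if going live with the criterion unmet

    criteria = [
        ExitCriterion("No open P1 defects", True,
                      evidence="Defect log extract at UAT exit"),
        ExitCriterion("Business sign-off from all process areas", False,
                      evidence="Sign-off register",
                      accepted_risk="Warehouse sign-off pending; sponsor accepted on record"),
    ]

    def readiness(items: list[ExitCriterion]) -> str:
        """Ready only when every criterion is met or its risk is formally accepted."""
        open_items = [c for c in items if not c.met and not c.accepted_risk]
        return "ready" if not open_items else f"not ready: {len(open_items)} criteria open"

    print(readiness(criteria))  # reviewed weekly in the open, not announced once

Run the same summary every week and the go/no-go conversation becomes a review of known facts rather than a verdict.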

The thing that actually makes UAT work

I want to end on something simple, because I think it gets lost in all the methodology and process.

UAT works when business users feel like they're being listened to.

That sounds obvious, but in practice it's easy for a programme to create an environment where people don't feel heard. Defects raised and not acknowledged. Concerns raised in meetings and quietly parked. Business participants who feel like their job is to rubber-stamp a decision already made. When that happens, you lose the engagement that makes UAT valuable — people go through the motions, they don't raise the hard issues, and you go live carrying risk that someone in the business knew about but stopped bothering to flag.

The best UAT environments I've worked in had something in common: a daily rhythm that was predictable, a test manager who took every raised issue seriously even when the answer was "that's working as designed," and a genuine sense that the programme wanted to know what wasn't working rather than hoping nobody would notice.

That's a culture thing as much as a process thing. And it starts before UAT does — in how business users were engaged during design, in whether they felt their requirements were heard, and in whether the programme has built enough trust that people are willing to be honest when they see something that doesn't look right.

No test plan creates that culture. People do.

Sundar Padmanabhan is the founder of Experience Exchange, Sydney. He has led test programmes across S/4HANA migrations, ECC upgrades, and enterprise technology transformations for Australian government and private sector clients over more than two decades.

Running into UAT challenges on your S/4HANA programme? Sometimes a few hours with someone who's seen it before is enough to change the trajectory. Reach out →