When you join a CRD implementation already in flight, one of the first things you notice is how much time is spent on rules that everyone thought were straightforward. The compliance team has a document. The CRD configuration team has a configuration. Neither maps cleanly to the other. Three weeks of the implementation schedule evaporate while everyone figures out why.

At State Street, the scope was a multi-strategy institutional manager migrating from a legacy compliance system. The manager had approximately 200 active compliance rules across equity long/short, fixed income relative value, and a smaller macro book. On paper, the rule migration was a configuration exercise. In practice, it was a definitional exercise first — and the configuration came second.

What follows is what I learned about compliance rule mapping that I wish someone had told me at the start.

The Definitional Gap Is Always Bigger Than You Think

Most compliance rule documents are written by compliance officers, not system implementers. That is appropriate — the compliance officer owns the rule. But compliance officer language and Charles River configuration language are not the same language, and the gap between them is where implementation time disappears.

A rule that reads "no single issuer exposure above 5% of portfolio NAV" looks clear. Then you get to the configuration and find that Charles River's issuer-level aggregation can be configured to roll up securities by CUSIP, by parent entity, by Bloomberg issuer ID, or by a custom grouping. The compliance document says nothing about this because the legacy system handled it one way and everyone assumed the new system would do the same. Now you need a formal decision: which grouping definition is correct for this mandate? That decision is not a configuration decision — it is a policy decision, and it needs to go back to the compliance officer for sign-off before you can build anything.
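The aggregation ambiguity can be made concrete with a small sketch. Nothing below is Charles River's actual schema or API — the field names (`cusip`, `parent_entity`, `mv`) and the positions are invented purely to show how the grouping choice flips the same book from compliant to in breach of a 5% limit:

```python
from collections import defaultdict

# Hypothetical positions: each security carries several candidate issuer keys.
positions = [
    {"cusip": "037833AA1", "parent_entity": "ALPHA_HOLDCO", "mv": 3_000_000},
    {"cusip": "037833BB2", "parent_entity": "ALPHA_HOLDCO", "mv": 2_500_000},
    {"cusip": "912828CC3", "parent_entity": "BETA_CORP",    "mv": 4_000_000},
]
nav = 100_000_000

def max_issuer_exposure(positions, nav, group_key):
    """Aggregate market value by the chosen grouping key and return the
    largest single-issuer exposure as a fraction of NAV."""
    totals = defaultdict(float)
    for p in positions:
        totals[p[group_key]] += p["mv"]
    return max(totals.values()) / nav

# By CUSIP, each line is its own issuer: largest exposure is 4.0% (compliant).
# Rolled up to parent entity, ALPHA_HOLDCO is 5.5% (breaches a 5% limit).
by_cusip  = max_issuer_exposure(positions, nav, "cusip")          # 0.04
by_parent = max_issuer_exposure(positions, nav, "parent_entity")  # 0.055
```

Same positions, same rule text, opposite compliance verdicts — which is exactly why the grouping choice is a policy decision, not a configuration detail.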

Multiply this by 200 rules and you understand where three weeks went.

The discipline that prevents this: before a single rule gets built in CRD, every rule requires a definitional sign-off document that specifies not just what the rule is but how ambiguous terms are resolved. Issuer aggregation method. NAV calculation timing (EOD vs. real-time). Which instruments are in scope (does "portfolio" include accruals? pending settlements?). What happens on the day a rule is added to a new portfolio. These decisions need to be made and recorded before configuration starts, not discovered during testing.
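One way to enforce that discipline is to treat the sign-off document as structured data with a hard gate. This is a minimal sketch of my own devising — the field names and the `ready_to_configure` check are illustrative, not a CRD artifact or any firm's actual template:

```python
from dataclasses import dataclass, fields

# Illustrative definitional sign-off record, one per rule.
@dataclass
class RuleSignOff:
    rule_id: str
    rule_text: str
    issuer_aggregation: str       # e.g. "parent_entity" vs. "cusip"
    nav_timing: str               # "EOD" vs. "real_time"
    in_scope_instruments: str     # accruals? pending settlements?
    new_portfolio_treatment: str  # behavior the day the rule is added
    approved_by: str = ""         # compliance officer sign-off

def ready_to_configure(s: RuleSignOff) -> bool:
    """A rule may enter configuration only when every definitional
    field is resolved and the compliance officer has signed off."""
    return all(getattr(s, f.name) for f in fields(s))

rule = RuleSignOff(
    rule_id="CONC-001",
    rule_text="No single issuer exposure above 5% of portfolio NAV",
    issuer_aggregation="parent_entity",
    nav_timing="EOD",
    in_scope_instruments="equities and bonds; excludes accrued income",
    new_portfolio_treatment="warn-only for first 5 business days",
)
blocked = not ready_to_configure(rule)  # True: no sign-off recorded yet
rule.approved_by = "Chief Compliance Officer"
approved = ready_to_configure(rule)     # True: configuration may start
```

The point of the structure is the gate: an unresolved field or a missing signature blocks the rule from entering the build queue, mechanically.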

CRD's Rule Engine Is More Flexible Than You Need It to Be

Charles River's compliance engine is genuinely powerful. You can build rules that reference market data, index weights, custom security attributes, portfolio-level derived metrics, and external data feeds. The flexibility is real. It is also dangerous for an implementation team under timeline pressure.

The failure mode is what I call compliance rule scope creep. You are building a concentration limit rule. The configuration team notices CRD can also calculate real-time Greeks for the options book. Someone suggests adding an options delta-adjusted exposure calculation to the concentration rule. This sounds like an enhancement — tighter risk measurement. What it actually is: scope expansion mid-implementation, requiring a new data feed, new testing, and new sign-off from a risk committee that has not been part of the compliance workstream.

The practice that contains this: every rule has a migration ticket and a scope boundary. The migration ticket says: here is what the legacy rule did. The new CRD rule must replicate that behavior exactly. Any enhancement is a separate ticket with a separate timeline. Phase one of the implementation is migration with behavioral equivalence. Enhancements come after go-live, when you have a stable baseline to enhance from.
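Behavioral equivalence can itself be made testable. A toy sketch, with both rule functions invented for illustration: phase one passes only when the migrated rule reproduces the legacy verdict on every case, including the at-limit boundary values where definitional drift hides:

```python
# Hypothetical legacy and migrated implementations of the same 5% limit.
def legacy_concentration_rule(exposure_pct):
    return "BREACH" if exposure_pct > 5.0 else "OK"

def crd_concentration_rule(exposure_pct):
    return "BREACH" if exposure_pct > 5.0 else "OK"

# Boundary-heavy case list: equivalence at 5.0 vs. 5.01 is where
# strict-vs-inclusive comparison differences surface.
cases = [4.2, 4.99, 5.0, 5.01, 7.5]
equivalent = all(
    legacy_concentration_rule(c) == crd_concentration_rule(c) for c in cases
)
```

Any enhancement — a different threshold, a delta-adjusted exposure input — would fail this check by design, which is what forces it onto a separate ticket.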

This sounds obvious. It is violated on almost every implementation I have seen, including this one, at least initially.

Testing Compliance Rules Requires Scenarios, Not Just Data

The standard approach to compliance rule testing is to load a portfolio, run the rule, check whether it fires correctly for known-good and known-bad positions. This catches configuration errors. It does not catch definitional errors.

A definitional error looks like this: the rule fires correctly on your test portfolio, but fires incorrectly on a live portfolio because the live portfolio has a position structure your test data did not include — a convertible bond, a structured note, a currency overlay position that affects issuer-level exposure. Your test said the rule worked. The rule works on your test. The rule does not work on the thing it actually needs to work on.

The testing protocol that catches this: compliance rules are tested against scenarios built from actual portfolio exceptions pulled from the legacy system. For each rule, find a real portfolio that actually breached the rule in the past 18 months, a real portfolio that was close to the limit, and a real portfolio that was well within the limit. If your CRD configuration cannot replicate the exception history, the rule is misconfigured — even if it fires correctly on synthetic test data.
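The replay can be expressed as a simple diff against the legacy exception log. The records and the rule function below are hypothetical — the shape of the check is the point:

```python
# Hypothetical candidate rule under test.
def crd_rule_fires(exposure_pct, limit=5.0):
    return exposure_pct > limit

# One past breach, one near-limit day, one well-inside day, pulled from
# the legacy system's exception history.
legacy_history = [
    {"portfolio": "FI-RELVAL-03", "exposure_pct": 5.4, "legacy_fired": True},
    {"portfolio": "EQ-LS-07",     "exposure_pct": 4.9, "legacy_fired": False},
    {"portfolio": "MACRO-01",     "exposure_pct": 1.2, "legacy_fired": False},
]

mismatches = [
    rec["portfolio"]
    for rec in legacy_history
    if crd_rule_fires(rec["exposure_pct"]) != rec["legacy_fired"]
]
# An empty mismatch list is the pass condition. Any entry means the rule
# is misconfigured, even if every synthetic test passed.
```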

At State Street, we found three rules that passed synthetic testing and failed historical exception testing. Two were definitional — the legacy system and CRD were aggregating slightly differently. One was a legitimate CRD configuration error. Without the historical scenario requirement, those three rules would have gone live broken.

The Interaction Between Rules Is Underspecified

Compliance rules are not independent. A trade that breaches a concentration limit also affects a sector exposure rule and may affect a benchmark deviation rule. When multiple rules fire on the same trade, the order in which they fire, the way they interact with the order management workflow, and how they are presented to the trader matters.

Charles River handles rule interactions through a combination of rule ordering, severity levels, and override workflows. These parameters are configurable. They are also almost never documented in the compliance rule document, because the compliance officer thinks about rules individually, not as an interacting system.

The practical consequence: you can configure every individual rule correctly and still have a compliance workflow that does not work in production because the interaction behavior is wrong. Rules that should block an order instead warn. Rules that should generate an override request instead hard-block and require compliance officer sign-off that adds a 45-minute delay to time-sensitive trades. The compliance team approved the individual rules but never approved the interaction model — because no one showed it to them.
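The interaction model is worth sketching explicitly, because "most restrictive severity wins" is exactly the kind of behavior that is obvious in code and invisible in a rule-by-rule document. The severity names and the winner-takes-all resolution below are my illustration, not CRD's actual interaction model:

```python
# Illustrative severity ladder; the most restrictive outcome wins.
SEVERITY_RANK = {"WARN": 0, "OVERRIDE_REQUIRED": 1, "HARD_BLOCK": 2}

def order_outcome(fired_rules):
    """fired_rules: list of (rule_id, severity) tuples for one order.
    Returns the effective action applied to that order."""
    if not fired_rules:
        return "PASS"
    return max(fired_rules, key=lambda r: SEVERITY_RANK[r[1]])[1]

# Rule A only warns, but rule B requires an override: the trader sees
# an override request, not a warning. The compliance team approved each
# rule individually; did they approve this combined behavior?
outcome = order_outcome([("CONC-001", "WARN"),
                         ("SECTOR-004", "OVERRIDE_REQUIRED")])
```

Walking the desk through a table of such combinations is the cheapest version of the workflow review described below.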

The fix: before UAT, run a full workflow review session with the compliance team and trading desk together. Walk through the full interaction model: here is a trade that triggers rule A and rule B simultaneously. Here is what happens. Here is the override workflow. Here is the escalation path. Does this match how you actually want to operate? This session takes half a day and saves two weeks of post-go-live remediation.

Custodian Data Matters More Than the CRD Configuration

This is the lesson that surprised me most. CRD's compliance engine is only as good as the data it is running against. At a major custodian like State Street, you have access to high-quality data — but the data structures are complex, and how you feed that data into CRD's compliance layer has significant implications for rule behavior.

The specific problem we encountered: State Street's position data included accrued income as a component of market value for fixed income positions. The compliance team's historical practice had been to exclude accrued income from concentration calculations — it is not economically relevant to a 5% issuer limit. The legacy system excluded it. No one thought to specify this in the compliance rule document because it was "obvious." CRD's default behavior included accrued income in market value calculations.

The result: a set of fixed income mandates appeared to be breaching concentration limits on the first data load, when they were not actually in breach. Discovering this, diagnosing it, and getting a formal decision from the compliance team about the correct treatment took four days and created a conversation with the portfolio managers about whether their existing positions were actually compliant — a conversation that should never have happened.
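The arithmetic of the mismatch is trivial once stated, which is part of why no one stated it. With illustrative numbers (not the actual positions involved):

```python
# The same bond position evaluated against a 5% issuer limit, with and
# without accrued income in market value.
nav = 100_000_000
clean_value = 4_900_000     # price * quantity
accrued_income = 300_000    # coupon accrual carried on the custodian feed

exposure_excl = clean_value / nav                     # legacy: 4.9%
exposure_incl = (clean_value + accrued_income) / nav  # CRD default: 5.2%

breach_excl = exposure_excl > 0.05  # False: compliant under legacy definition
breach_incl = exposure_incl > 0.05  # True: apparent breach on first data load
```

A position sitting at 4.9% under one definition and 5.2% under the other is not a data error in either system — it is an unresolved policy question wearing a breach alert.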

The protocol: for every external data feed into the compliance layer, document precisely what each field contains, what the legacy system included or excluded, and what CRD's default treatment is. Any mismatch requires a formal policy decision before configuration starts.

The Parallel Run Period Is Not Long Enough

Most CRD implementations include a parallel run period — a defined window where both the legacy system and CRD run simultaneously, generating compliance alerts that are compared daily. The parallel run is how you confirm that the CRD configuration produces equivalent results to the legacy system.

The standard parallel run period in the implementations I have seen is two weeks. Two weeks is not enough. The reason is statistical: two weeks of portfolio activity does not generate enough near-limit and at-limit scenarios to validate the full ruleset. A rule that handles normal-band positions correctly but mishandles edge-of-limit positions will not be caught in a two-week parallel run if none of the portfolios happen to approach the limit during those two weeks.

The minimum parallel run that actually validates a complex ruleset is four weeks, with the parallel run deliberately seeded with historical portfolios that produced exceptions in the prior 12 months — not just live activity. The historical seeding is the only way to ensure you have tested the full distribution of scenarios your rules will encounter in production.
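The daily comparison itself is a set diff of alerts from the two systems, with the seeded historical portfolios in the mix. Identifiers below are hypothetical:

```python
# Daily parallel-run comparison: alerts keyed by (portfolio, rule_id).
def daily_alert_diff(legacy_alerts, crd_alerts):
    """Return alerts only one system produced; two empty sets mean the
    day is behaviorally equivalent."""
    legacy, crd = set(legacy_alerts), set(crd_alerts)
    return {"legacy_only": legacy - crd, "crd_only": crd - legacy}

# Live activity plus one seeded historical portfolio that breached
# a sector rule last year.
legacy_alerts = {("FI-RELVAL-03", "CONC-001"), ("SEED-2023-11", "SECTOR-004")}
crd_alerts    = {("FI-RELVAL-03", "CONC-001")}

diff = daily_alert_diff(legacy_alerts, crd_alerts)
# The seeded portfolio surfaces a discrepancy that weeks of live
# activity alone might never have produced.
```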

Four weeks feels long when you are already behind schedule. It is not negotiable if you care about go-live quality. The State Street implementation ran a three-week parallel run and found a rule discrepancy in week three that required a configuration change. If the parallel run had ended at two weeks, the discrepancy would have been a production incident.

What This Means for Your CRD Implementation

The pattern across all of these lessons is the same: compliance rule mapping failures are not technical failures. They are definitional and process failures — unclear specifications, underspecified interactions, insufficient testing scenarios, and inadequate parallel run periods. CRD is capable of implementing whatever compliance framework you bring to it. The problem is that the compliance framework is rarely as well-specified as it appears when you are reading the compliance document.

The implementations that go well are the ones where the definitional work is done before the configuration work starts. That means a structured rule specification process with formal sign-off, a scope boundary that separates migration from enhancement, a testing protocol built on historical exceptions rather than synthetic data, a workflow review session that covers rule interactions rather than individual rules, and a parallel run long enough to generate meaningful coverage.

None of this is exotic. All of it requires discipline when the schedule is tight and the configuration team is eager to start building. The discipline is the implementation.

If you are scoping a CRD implementation — or troubleshooting one that is behind schedule — the place to start is the definitional layer, not the configuration layer. The compliance rules that are causing problems are almost never configured wrong. They are specified wrong, and the configuration accurately reflects a bad specification.
