
Linking (A/S/I Convenience)

1. Calculation Name

Allocation/Selection/Interaction Convenience Linking

2. Description and Mathematical Formula

This plugin accepts period-by-period A/S/I contributions (optionally by group) and outputs:

  • Normalized raw contributions that reconcile to the active return in each period
  • Menchero-style scaling coefficients \( k_t = \frac{\ln(1 + r^P_t) - \ln(1 + r^B_t)}{r^P_t - r^B_t} \) for each period
  • Horizon-linked contributions using a global scaling factor so the sum equals the geometric active return

Group handling: rows labelled TOTAL (or a custom total label) are dropped whenever group-level rows are available for the same date, so the linked contributions add up cleanly across groups.
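
For intuition, the sketch below evaluates \( k_t \) for a single period using the formula above. It is a standalone illustration rather than part of the plugin; the return values are taken from the January row of the sample data in section 3.

```python
import numpy as np

# January 2024 sample period: portfolio 2.50%, benchmark 2.20%
rp, rb = 0.025, 0.022

# Menchero-style per-period coefficient; close to 1 when period returns are small
k = (np.log1p(rp) - np.log1p(rb)) / (rp - rb)
print(round(k, 4))  # ~0.977
```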

3. Input Sample Data

| Date | Portfolio Return | Benchmark Return | Allocation (raw) | Selection (raw) | Interaction (raw) | Group |
| --- | --- | --- | --- | --- | --- | --- |
| 2024-01-31 | 2.50% | 2.20% | 0.06% | 0.10% | 0.02% | Rates |
| 2024-01-31 | 2.50% | 2.20% | 0.04% | 0.03% | 0.05% | Spread |
| 2024-01-31 | 2.50% | 2.20% | 0.10% | 0.15% | 0.07% | TOTAL |
| 2024-02-29 | 1.80% | 1.50% | 0.07% | 0.11% | 0.03% | Rates |
| 2024-02-29 | 1.80% | 1.50% | 0.05% | 0.04% | 0.00% | Spread |
| 2024-02-29 | 1.80% | 1.50% | 0.12% | 0.15% | 0.03% | TOTAL |
| 2024-03-31 | -0.50% | -0.80% | 0.05% | 0.12% | 0.04% | Rates |
| 2024-03-31 | -0.50% | -0.80% | 0.03% | 0.04% | 0.02% | Spread |
| 2024-03-31 | -0.50% | -0.80% | 0.08% | 0.16% | 0.06% | TOTAL |

4. Mathematical Solution

  1. Remove total rows once group rows exist for the same date.
  2. Normalize raw A/S/I contributions so that Rates + Spread equals the active return for each month (0.30% in this sample).
  3. Compute per-period coefficients \( k_t \) using the logarithmic ratio shown above.
  4. Apply the global scaling factor so that the sum of linked effects equals the geometric active return \((1+R_P)/(1+R_B)-1\); the factor is derived just below this list.
  5. Sum across periods per group to obtain horizon-linked results.
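
For reference, the global factor in step 4 can be written out from the definitions above. Because \( k_t \left(r^P_t - r^B_t\right) = \ln\frac{1 + r^P_t}{1 + r^B_t} \), summing over periods gives

\[ \sum_t k_t \left(r^P_t - r^B_t\right) = \ln\frac{1 + R_P}{1 + R_B}, \]

where \(R_P\) and \(R_B\) are the compounded portfolio and benchmark returns over the horizon. The global factor (written here as \( c \); the code below calls it scale_factor) is therefore

\[ c = \frac{(1 + R_P)/(1 + R_B) - 1}{\ln\left[(1 + R_P)/(1 + R_B)\right]}, \]

and scaling every per-period effect by \( k_t \, c \) makes the linked contributions sum exactly to the geometric active return.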

The final linked contributions mirror the BF Link output:

  • Rates: Allocation 18.00 bps, Selection 33.01 bps, Interaction 9.01 bps
  • Spread: Allocation 11.99 bps, Selection 11.00 bps, Interaction 6.99 bps
  • Residual ≈ 0.00 bps (within 0.1 bps tolerance)

5. Sample Python and R Code

```python
import pandas as pd
import numpy as np

data = pd.DataFrame(
    {
        "date": pd.to_datetime(
            ["2024-01-31", "2024-01-31", "2024-01-31",
             "2024-02-29", "2024-02-29", "2024-02-29",
             "2024-03-31", "2024-03-31", "2024-03-31"]
        ),
        "group": ["Rates", "Spread", "TOTAL"] * 3,
        "rp": [0.025, 0.025, 0.025, 0.018, 0.018, 0.018, -0.005, -0.005, -0.005],
        "rb": [0.022, 0.022, 0.022, 0.015, 0.015, 0.015, -0.008, -0.008, -0.008],
        "alloc": [0.0006, 0.0004, 0.0010, 0.0007, 0.0005, 0.0012, 0.0005, 0.0003, 0.0008],
        "select": [0.0010, 0.0003, 0.0015, 0.0011, 0.0004, 0.0015, 0.0012, 0.0004, 0.0016],
        "inter": [0.0002, 0.0005, 0.0007, 0.0003, 0.0000, 0.0003, 0.0004, 0.0002, 0.0006],
    }
)

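# Drop TOTAL rows for any date that also has group-level detail rows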
def drop_totals(df):
    mask = df.groupby("date")["group"].transform(lambda g: g.str.upper().isin(["TOTAL", "TOTALS"]))
    has_detail = df.groupby("date")["group"].transform(lambda g: (~g.str.upper().isin(["TOTAL", "TOTALS"])).any())
    return df[~(mask & has_detail)]

filtered = drop_totals(data)

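# Rescale raw A/S/I contributions so their sum per date equals the period's active return (rp - rb)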
def normalize(group):
    active = group["rp"].iloc[0] - group["rb"].iloc[0]
    total_raw = (group["alloc"] + group["select"] + group["inter"]).sum()
    scale = active / total_raw if total_raw else 0
    return group.assign(
        alloc_norm=group["alloc"] * scale,
        select_norm=group["select"] * scale,
        inter_norm=group["inter"] * scale,
    )

normalized = filtered.groupby("date", group_keys=False).apply(normalize)
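# Menchero-style per-period scaling coefficient k_t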
normalized["k"] = (np.log1p(normalized["rp"]) - np.log1p(normalized["rb"])) / (
    normalized["rp"] - normalized["rb"]
)

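# One row per period; the global scale factor rescales the k-weighted sum of period
# active returns so the linked effects add up to the geometric active return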
per_period = normalized.drop_duplicates("date")[["date", "rp", "rb", "k"]]
active_scaled = (per_period["k"] * (per_period["rp"] - per_period["rb"])).sum()
cum_active = np.prod(1 + per_period["rp"]) / np.prod(1 + per_period["rb"]) - 1
scale_factor = cum_active / active_scaled

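# Linked contributions: normalized per-period effect * k_t * global scale factor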
normalized["alloc_linked"] = normalized["alloc_norm"] * normalized["k"] * scale_factor
normalized["select_linked"] = normalized["select_norm"] * normalized["k"] * scale_factor
normalized["inter_linked"] = normalized["inter_norm"] * normalized["k"] * scale_factor

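# Aggregate linked effects by group and convert to basis points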
totals = (
    normalized.groupby("group")[["alloc_linked", "select_linked", "inter_linked"]]
    .sum()
    .mul(10000)
)
print(totals.round(2))
```
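
As a quick self-check, the snippet below (a continuation of the script above, assuming totals and cum_active are still in scope) confirms that the linked effects reconcile to the geometric active return within the 0.1 bp tolerance quoted in section 4.

```python
# Residual between the summed linked effects and the geometric active return, in bps
residual_bps = totals.values.sum() - cum_active * 10_000
assert abs(residual_bps) < 0.1, f"residual too large: {residual_bps:.4f} bps"
print(f"Residual: {residual_bps:.4f} bps")
```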

```r
library(dplyr)

data <- tibble::tibble(
  date = rep(as.Date(c("2024-01-31", "2024-02-29", "2024-03-31")), each = 3),
  group = rep(c("Rates", "Spread", "TOTAL"), 3),
  rp = rep(c(0.025, 0.018, -0.005), each = 3),
  rb = rep(c(0.022, 0.015, -0.008), each = 3),
  alloc = c(0.0006, 0.0004, 0.0010, 0.0007, 0.0005, 0.0012, 0.0005, 0.0003, 0.0008),
  select = c(0.0010, 0.0003, 0.0015, 0.0011, 0.0004, 0.0015, 0.0012, 0.0004, 0.0016),
  inter = c(0.0002, 0.0005, 0.0007, 0.0003, 0.0000, 0.0003, 0.0004, 0.0002, 0.0006)
)

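# Drop TOTAL rows for any date that also has group-level detail rows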
filtered <- data %>%
  group_by(date) %>%
  filter(!(toupper(group) == "TOTAL" & any(toupper(group) != "TOTAL"))) %>%
  ungroup()

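# Normalize raw A/S/I to the period's active return and compute the per-period coefficient k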
normalized <- filtered %>%
  group_by(date) %>%
  mutate(
    active = first(rp) - first(rb),
    total_raw = sum(alloc + select + inter),
    scale = ifelse(total_raw == 0, 0, active / total_raw),
    alloc_norm = alloc * scale,
    select_norm = select * scale,
    inter_norm = inter * scale,
    k = (log1p(rp) - log1p(rb)) / (rp - rb)
  ) %>%
  ungroup()

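# One row per period; the global scale factor reconciles the summed linked effects
# with the geometric active return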
periods <- normalized %>% distinct(date, rp, rb, k)
active_scaled <- sum(periods$k * (periods$rp - periods$rb))
cum_active <- prod(1 + periods$rp) / prod(1 + periods$rb) - 1
scale_factor <- cum_active / active_scaled

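# Link per-period effects (normalized effect * k * scale_factor) and aggregate by group in bps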
linked <- normalized %>%
  mutate(
    alloc_linked = alloc_norm * k * scale_factor,
    select_linked = select_norm * k * scale_factor,
    inter_linked = inter_norm * k * scale_factor
  ) %>%
  group_by(group) %>%
  summarise(
    alloc_bps = sum(alloc_linked) * 10000,
    select_bps = sum(select_linked) * 10000,
    inter_bps = sum(inter_linked) * 10000,
    .groups = "drop"
  )
linked
```

6. Output Table

| Group | Allocation (bps) | Selection (bps) | Interaction (bps) | Total (bps) |
| --- | --- | --- | --- | --- |
| Rates | 18.00 | 33.01 | 9.01 | 60.02 |
| Spread | 11.99 | 10.99 | 6.99 | 29.97 |
| Total | 29.99 | 44.00 | 16.00 | 90.00 |

7. Conclusion

FinFacts’ A/S/I convenience plugin is ideal when upstream systems already compute raw contributions. This template shows how to feed those numbers into the desktop app (or documentation pipelines) and cross-check the linked output.