Dev Log: February 6, 2026
courses
Analyzed the Phase E experiment results from FarmShare, diagnosing two distinct failure modes across the 45-experiment SLURM array. The low-load experiments OOM’d while the high-load ones hit wall-clock timeouts, but enough data came through to confirm FGD’s JCT advantage across all load levels. Then pivoted to getting the cluster visualization tool working at Alibaba scale, which required rethinking the heatmap renderer, adding a tab-based UI, building wrapping GPU grids for fragmentation visibility, and implementing FGD-style fragmentation metrics as new chart types.
Two failure modes:
- Experiments 0-14 (60 jph): FAILED after ~1.5 hrs — OOM killed. Memory grew to 3-5.5 GB as the simulation accumulated state over 500+ simulated hours. The OOM killer terminated them silently (exit code 1).
- Experiments 15-44 (180 jph + 360 jph): TIMEOUT after 4:00:01 — hit the SLURM wall-clock limit. These are higher-load experiments where each scheduling round takes longer (larger LP problems), so simulated time advances more slowly per wall-clock hour.
- 60 jph experiments (0-14): All OOM-killed after ~1.5 hrs wall time at 500+ simulated hours. Window completion is 995-997/1000 — essentially done. The JCT values are reliable.
- 180 jph experiments (15-29): All hit the 4-hour SLURM timeout. Window completion varies: strided configs got to ~970/1000, but FGD only reached 900/1000. FGD’s placement decisions are more compute-intensive per scheduling round, so its wall-clock budget runs out faster even though it covers less simulated time.
- 360 jph experiments (30-44): Same TIMEOUT pattern, but with even less window coverage (817-907/1000). The cluster is near saturation at 96-99% utilization.
- Key result signal: Despite incomplete windows, FGD consistently shows the lowest JCT across all load levels. At 180 jph: FGD=37,781s vs baseline=58,900s (36% reduction). At 360 jph: FGD=27,563s vs baseline=40,467s (32% reduction). The JCT advantage is clear even from partial data.
The viz tool works by parsing .log files (scheduler debug logs with allocation/completion/telemetry events) into .viz.bin binary files, which are then served as a static web app. The preprocessor needs the full scheduler log (not just the SLURM stderr), which means the experiments need --save-logs to generate per-experiment .log files. However, the SLURM stderr is the scheduler log — it has all the same EVENT lines. The issue is the viz tool’s --cluster flag expects the old k80:p100:v100 3-type format, but Alibaba has 6 GPU types.
The binary format and preprocessor are actually fully generic — num_gpu_types is a variable in the header, and gpu_types are passed via the config dict. The only hardcoded part is the --cluster CLI arg which parses k80:p100:v100 format. The preprocessor just needs to be called with the right cluster spec and GPU type names for Alibaba’s 6 types. The binary format will handle any number of GPU types.
However, there’s a scale problem: the Alibaba cluster has 6,200 GPUs total. Each round record stores a uint32 per GPU for allocations (4 * 6200 = 24.8 KB per round). With 5000+ rounds, that’s 124 MB per .viz.bin file. The logs themselves are 29M+ lines. This will be very slow to process and too large to serve.
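A quick sanity check of that arithmetic, as a standalone sketch (the helper name is mine, not from the preprocessor):

```typescript
// Rough size estimate for a .viz.bin allocation section: each round stores
// one uint32 job ID per GPU. (estimateAllocBytes is an illustrative name,
// not a function in the actual preprocessor.)
function estimateAllocBytes(numGpus: number, numRounds: number): number {
  const BYTES_PER_GPU = 4; // uint32 per GPU slot
  return BYTES_PER_GPU * numGpus * numRounds;
}

estimateAllocBytes(6200, 1);    // 24,800 bytes ≈ 24.8 KB per round
estimateAllocBytes(6200, 5000); // 124,000,000 bytes ≈ 124 MB per file
```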
The GPU heatmap (per-GPU allocation grid) is fundamentally incompatible with Alibaba-scale clusters. At 28px per GPU cell, 4,392 G2 GPUs alone would need a 123,000px-wide canvas — browsers cap canvas dimensions around 16,384px. But the timeseries charts (utilization, JCT, queue depth, completion rate) work purely from telemetry data and don’t care about individual GPU state. These are what we should generate.
- Renderer scaling: With 4,392 G2 GPUs and a 1200px target width, idealWidth = 1200/4392 ≈ 0.27px. The renderer will use cellWidth=1, cellGap=0, cellHeight=2 — each GPU becomes a single colored pixel. Rows of different GPU types will have visibly different lengths (G2: 4392px vs V100M16: 192px), creating a natural visual encoding of cluster heterogeneity.
- Canvas dimensions: At 1px per GPU, the canvas is 60 + 4392 = 4452px wide, which exceeds the viewport but is scrollable. The CSS likely constrains the display width, so the canvas will render at the CSS size and the browser scales it. This naturally anti-aliases the pixel dots into a smooth density map.
- Backwards compatibility: The legacy 36:36:36 format still works — small clusters get the original 28px cells with text labels. The showLabels flag disables text rendering when cells are too small to read.
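The sizing fallback described above could look roughly like this (chooseCellStyle and the legacy 2px gap are assumptions, not the renderer's actual code):

```typescript
// When the ideal per-GPU width drops below 1px, fall back to 1px cells with
// no gap and suppress labels; small clusters keep the legacy 28px cells.
interface CellStyle {
  cellWidth: number;
  cellGap: number;
  cellHeight: number;
  showLabels: boolean;
}

function chooseCellStyle(numGpus: number, targetWidth = 1200): CellStyle {
  const idealWidth = targetWidth / numGpus; // e.g. 1200 / 4392 ≈ 0.27px
  if (idealWidth < 1) {
    // Alibaba scale: one pixel per GPU, rows become density maps
    return { cellWidth: 1, cellGap: 0, cellHeight: 2, showLabels: false };
  }
  // Legacy small-cluster path (the 2px gap here is a guess)
  return { cellWidth: 28, cellGap: 2, cellHeight: 28, showLabels: true };
}

chooseCellStyle(4392); // 1px cells, no labels
chooseCellStyle(36);   // legacy 28px cells with labels
```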
121 MB for a single experiment’s viz file! That’s because each round stores 4 bytes * 6200 GPUs = 24.8 KB of allocation data, even though it’s all zeros (we don’t have per-GPU data from telemetry alone). Over 5,114 rounds that’s 5114 * 24800 ≈ 127 MB — i.e. the 121 MiB reported on disk. The timeseries charts will work fine, but the heatmap will just show all-empty GPUs since we only have aggregate stats, not per-GPU allocations.
For the heatmap to show actual job placements, we’ll need the [Micro-task scheduled] lines from the full SLURM logs — which requires FarmShare.
The viz computes the “moving JCT” chart from individual job records (sim.jobs with completionRound and duration). Since we passed an empty jobs list, completedJobs is empty, so the JCT chart shows nothing and the completed count stays 0. The round-level avg_jct from telemetry is already in the data but the viz prefers per-job granularity for the moving-window calculation.
The CDF chart normally builds a kernel density estimate from hundreds of individual job durations to show the full distribution shape. With only a single avg_jct value as fallback, the CDF will just be a vertical step function at that point — not a real distribution. It’ll at least show something and let you compare the two experiments’ average JCTs visually. For a proper CDF, we’d need the per-job completion data from the full logs.
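For reference, once per-job durations are available, even a plain empirical CDF (a simpler stand-in for the viz's kernel density estimate; function name hypothetical) shows why a single fallback value degenerates to one step:

```typescript
// Empirical CDF over per-job completion times: sort durations, then each
// point gets cumulative probability (i+1)/n.
function empiricalCdf(durations: number[]): Array<{ x: number; p: number }> {
  const sorted = [...durations].sort((a, b) => a - b);
  return sorted.map((x, i) => ({ x, p: (i + 1) / sorted.length }));
}

empiricalCdf([120, 60, 180]); // three steps: a real (if coarse) distribution
empiricalCdf([37781]);        // one point: a vertical step, no shape information
```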
CSS [hidden] override pattern: The HTML hidden attribute sets display: none by default, but we still need .tab-bar[hidden] { display: none } because our .tab-bar class sets display: flex which would override the default hidden behavior. This is a common gotcha: any CSS display property overrides the hidden attribute’s built-in display: none.
Single-sim scoping with descendant selectors: Using #tab-heatmap.single-sim #sim-1 ensures the full-width styles only apply when the .single-sim class is present. When a second sim loads and .single-sim is removed, everything reverts to the default side-by-side layout with no extra cleanup needed.
Re-render on tab switch: When the user is on the Charts tab, the heatmap canvas is display: none. Canvas draws still execute but the browser may skip compositing. More importantly, if the canvas container was resized or the allocation data changed while hidden, the visual state would be stale. Calling renderFull() on the heatmap renderers when switching to the Heatmap tab ensures the canvas always shows the current round’s data.
Why default to Charts tab: For two-sim comparison (the auto-loaded default), the Charts tab gives the most useful overview — you can see utilization, JCT, and queue trends for both sims overlaid. The heatmap is more useful for spatial analysis of individual experiments. Defaulting to Charts means the user sees the most information-dense view first.
Incremental hover highlighting: The old code called renderFull() on every hover change, which does: (1) clearRect the entire canvas, (2) redraw all row labels, (3) redraw every GPU cell. For 6200 GPUs that’s ~6200 fillRect + possible fillText calls per mouse move.
The new _repaintJobCells(jobId) scans prevAllocations for cells matching the job ID and repaints only those. A typical job uses 1-8 GPUs, so we go from ~6200 draws to ~16 draws (old + new job cells). The _renderCell method already handles the highlight border logic based on this.hoveredJobId, so cells for the old job lose their border and cells for the new job gain one.
Why we don’t need clearRect per cell: _renderCell starts with fillRect which completely overwrites the cell area, including any old stroke borders. The stroke from the old highlight is painted inside the cell bounds (offset by 1px), so the new fillRect covers it entirely.
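The dirty-set idea can be sketched independently of the canvas calls (cellsToRepaint is a hypothetical helper; the real _repaintJobCells also issues the fillRect calls):

```typescript
// Given the flat allocation array, return only the cell indices whose
// highlight state can change when the hovered job switches from oldJob
// to newJob. Everything else is untouched, so no full clearRect is needed.
function cellsToRepaint(
  allocations: Uint32Array,
  oldJob: number | null,
  newJob: number | null,
): number[] {
  const dirty: number[] = [];
  for (let i = 0; i < allocations.length; i++) {
    const id = allocations[i];
    if (id !== 0 && (id === oldJob || id === newJob)) dirty.push(i); // 0 = idle (assumed sentinel)
  }
  return dirty;
}

// A 6200-GPU cluster where job 7 holds 4 GPUs and job 9 holds 2:
const alloc = new Uint32Array(6200);
alloc.set([7, 7, 7, 7], 100);
alloc.set([9, 9], 4000);
cellsToRepaint(alloc, 7, 9); // 6 cells to repaint instead of 6200 full redraws
```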
One-shot flag pattern: The defaultsLoaded flag is set once after auto-load and cleared the first time the user loads a file. After that, loading a second file into the other slot works normally for manual comparison. This avoids the trap of clearing the user’s own Sim 1 when they load Sim 2 — the flag only triggers once to dismiss the demo data.
Clearing the “other” slot: The loop for (let i = 0; i < 2; i++) if (i !== simIndex) clears whichever default-loaded sim isn’t being replaced by the user’s file. model.clearSimulation(i) fires the simulationCleared event, which the controller’s existing handler uses to hide the section, null out renderers/buffers/chart data, and update the tab layout to single-sim mode.
Why wrapping grid instead of a single long row: The Alibaba cluster has 4392 G2 GPUs. At any visible cell size, a single row would be thousands of pixels wide, requiring horizontal scrolling and making it impossible to see the full allocation pattern at once. A wrapping layout (like text in a paragraph) fits all GPUs of a type into a compact block that reveals fragmentation patterns — idle “holes” scattered throughout the grid are immediately visible as dark spots.
Why custom canvas over a library: Standard heatmap libraries (Plotly, D3 heatmap) expect a 2D matrix. Our data is variable-width rows of GPUs grouped by type, with wrapping. Custom canvas gives us exact control over the layout while staying performant at 6200+ cells.
Wrapping grid for fragmentation visibility: The key design choice here is using a wrapping layout (like text reflow) instead of a single long row per GPU type. For a cluster with 4392 G2 GPUs, a single row would need horizontal scrolling and you’d never see the full pattern. The wrapping grid fills the viewport width, so at “Fit” zoom you see every GPU as a colored dot. Fragmentation (idle gaps scattered among allocated blocks) becomes immediately visible as dark spots in the grid — like a visual “Swiss cheese” pattern.
Live update while modal is open: The modal receives update(allocations) calls from _onRoundChanged, so you can scrub the timeline or play back while viewing the expanded heatmap. Playback controls (Space, arrows, Home/End) still work when the modal is open, but tab-switching keys (1/2) are blocked to avoid confusing state changes behind the modal.
DPR-aware rendering: The canvas uses window.devicePixelRatio to render at native resolution on Retina/HiDPI displays. Without this, the heatmap would look blurry at small cell sizes. The pattern: set canvas.width to logicalWidth * dpr, set canvas.style.width to logicalWidth + 'px', then ctx.setTransform(dpr, 0, 0, dpr, 0, 0) so all drawing coordinates remain in logical pixels.
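A minimal sketch of that pattern, written against structural types so it runs without a browser (the property names and setTransform signature follow the standard Canvas API):

```typescript
interface CanvasLike {
  width: number;
  height: number;
  style: { width: string; height: string };
}
interface CtxLike {
  setTransform(a: number, b: number, c: number, d: number, e: number, f: number): void;
}

// Backing store at native resolution, CSS size in logical pixels, and a
// transform so all drawing coordinates stay logical.
function setupHiDpi(
  canvas: CanvasLike,
  ctx: CtxLike,
  logicalW: number,
  logicalH: number,
  dpr: number,
): void {
  canvas.width = logicalW * dpr;
  canvas.height = logicalH * dpr;
  canvas.style.width = `${logicalW}px`;
  canvas.style.height = `${logicalH}px`;
  ctx.setTransform(dpr, 0, 0, dpr, 0, 0);
}
```

In the browser, dpr would come from window.devicePixelRatio and canvas/ctx from the real DOM elements.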
Why DOM-based bars instead of another canvas: The stacked bars have a small, fixed number of elements (one bar per GPU type, a handful of job-category segments each). DOM divs with percentage widths are perfect here — they’re trivially responsive, get hardware-accelerated rendering, and CSS transitions can animate changes between rounds. The old canvas approach was needed because we were drawing thousands of individual cells; with bars, we have maybe 6 rows with ~10 segments each.
Data flow for bars: roundData.allocations is a flat array indexed by global GPU index. config.gpu_types tells us ranges: if G2 has 4392 GPUs and T4 has 840, then indices 0-4391 are G2 and 4392-5231 are T4. For each type’s range, we count how many GPUs are allocated to each job category, then render proportional segments.
O(1) job lookups via lazy Map: The original code used sim.jobs.find() (O(n) linear scan) for every GPU in every round. With 6200 GPUs and potentially hundreds of jobs, that’s millions of comparisons per scrub. Building a Map<jobId, job> once and using .get() drops this to O(1) per GPU. The _jobMap and _typeMap are built lazily on first render and cached on the container element.
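The two ideas above, type-range partitioning and the prebuilt job map, combine into one sketch (field names like category and the idle sentinel 0 are assumptions):

```typescript
interface Job { id: number; category: string }
interface GpuTypeRange { name: string; start: number; count: number }

// Count allocated GPUs per job category within each GPU type's index range,
// using a prebuilt Map for O(1) job lookup instead of sim.jobs.find().
function countByCategory(
  allocations: Uint32Array,
  ranges: GpuTypeRange[],
  jobMap: Map<number, Job>,
): Map<string, Map<string, number>> {
  const out = new Map<string, Map<string, number>>();
  for (const r of ranges) {
    const counts = new Map<string, number>();
    for (let i = r.start; i < r.start + r.count; i++) {
      const jobId = allocations[i];
      if (jobId === 0) continue; // idle GPU
      const cat = jobMap.get(jobId)?.category ?? "unknown"; // one hash lookup
      counts.set(cat, (counts.get(cat) ?? 0) + 1);
    }
    out.set(r.name, counts);
  }
  return out;
}
```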
DOM reuse for bar segments: Instead of rebuilding all segment divs each round, _updateAllocBars reuses existing <div> elements and only creates new ones when needed (via track.children[segIdx]). Extra segments from previous rounds are removed. This avoids DOM churn and keeps event listeners intact. The segments also get CSS transition: width 0.15s for smooth animation when scrubbing.
Sorting by count for stability: Categories are sorted largest-first, so the visual ordering stays consistent as you scrub. Without this, segments would jump around as relative sizes change between rounds.
Tab UI architecture: The current layout places charts, heatmaps, and queue panels all in the same scroll flow. By introducing tabs, we separate the “aggregate time-series analysis” view (Charts) from the “per-GPU spatial view” (Heatmap & Metrics). The shared scrubber stays above both tabs so timeline position is preserved across tab switches — this is important because the scrubber drives model.currentRound which both views consume.
Why incremental hover matters: The current _handleHover calls renderFull() which does clearRect on the entire canvas, then redraws every cell. For Alibaba-scale clusters (6200 GPUs), that’s thousands of fillRect calls per mouse move. The fix uses a dirty-set approach: only repaint cells belonging to the old/new highlighted job.
Data availability matters for metric computation. The FGD paper’s fragmentation metrics (Fig 7/9) are deeply tied to a per-node model — servers with multiple GPU slots, CPU/memory constraints, and a workload popularity distribution. The viz tool, however, has a flat per-GPU allocation array (one job ID per GPU slot) and GPU types with counts. This means we can compute some metrics directly (Unallocated %, fragmentation approximations) but others that depend on node topology or CPU/memory constraints (non-GPU fragmentation, stranded vs. deficient breakdown) require either: (a) adding node topology info to the binary format, or (b) approximation heuristics.
Architecture overview: The visualizer pipeline works as: Python preprocessor (preprocess_viz.py) parses simulation logs into a binary .viz.bin format, which the JS decoder reads into memory. viz.js (the Controller) orchestrates everything — it precomputes chart data arrays once on load, then the TimeSeriesChart and CDFChart classes handle rendering. Adding new charts means: (1) adding data to the precompute step, (2) creating chart instances, and (3) wiring them into the render loop.
FGD simplification for Gavel: The full FGD paper handles CPU, memory, fractional GPUs, and GPU-type constraints. But in Gavel’s model, jobs only request whole GPUs with no CPU/memory constraints, so the fragmentation formula simplifies dramatically: a node’s fragmentation for a task is either freeGpus (if the task can’t fit) or 0 (if it can).
Key simplification: The full FGD paper’s computeNodeFragmentation checks CPU, memory, GPU type compatibility, and fractional GPU availability. In Gavel’s model, jobs only request whole GPUs with no other resource constraints, so the check reduces to a single comparison: gpuRequest > node.freeGpus. If the task doesn’t fit, all free GPUs on that node are “stranded” (fragmented). This makes the O(nodes * workload_tasks) computation very fast since each inner operation is just a comparison and conditional add.
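A sketch of the simplified computation (function names and uniform task weighting are assumptions; the FGD paper weights by task popularity):

```typescript
// Gavel-style simplification: with whole-GPU requests and no CPU/memory
// constraints, a node's fragmentation for a task is all of its free GPUs
// if the task can't fit there, else 0.
function nodeFragmentation(freeGpus: number, gpuRequest: number): number {
  return gpuRequest > freeGpus ? freeGpus : 0;
}

// Sum across nodes, averaged over the workload's task types
// (uniform weights assumed here for simplicity).
function clusterFragmentation(freeGpusPerNode: number[], taskRequests: number[]): number {
  let total = 0;
  for (const free of freeGpusPerNode) {
    for (const req of taskRequests) total += nodeFragmentation(free, req);
  }
  return total / taskRequests.length;
}

// 8-GPU nodes with 3 and 5 free GPUs, tasks of 1/4/8 GPUs: the node with
// 3 free strands them for the 4- and 8-GPU tasks; the node with 5 free
// strands them for the 8-GPU task only.
clusterFragmentation([3, 5], [1, 4, 8]); // (3 + 3 + 5) / 3 ≈ 3.67 GPUs
```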
The occupiedNodes metric (FGD Fig 9b) captures how spread out the workload is across the cluster. Higher occupied nodes with low utilization means more scattered allocations, which correlates with more fragmentation.
Stacked area chart rendering: The key trick is maintaining a _stackBase array that accumulates values across stacked series. Each series draws its filled area between stackBase[i] (bottom edge) and stackBase[i] + values[i] (top edge), then updates stackBase for the next series. This ensures areas stack visually without overlapping. The _computeRange also needs to account for this accumulation so the y-axis scales to fit the total stacked height, not individual series.
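The stacking logic can be sketched as (names hypothetical):

```typescript
// Each series is drawn between the running stackBase (bottom edge) and
// stackBase + its own values (top edge); the y-range must cover the
// accumulated total, not any single series.
function stackSeries(
  series: number[][],
): { bands: Array<Array<[number, number]>>; yMax: number } {
  const n = series[0]?.length ?? 0;
  const stackBase = new Array<number>(n).fill(0);
  const bands: Array<Array<[number, number]>> = [];
  let yMax = 0;
  for (const values of series) {
    const band: Array<[number, number]> = values.map((v, i) => {
      const bottom = stackBase[i];
      const top = bottom + v;
      stackBase[i] = top; // accumulate for the next series
      yMax = Math.max(yMax, top);
      return [bottom, top];
    });
    bands.push(band);
  }
  return { bands, yMax };
}

// Two series over three rounds: the second stacks on top of the first.
stackSeries([[1, 2, 3], [4, 0, 1]]);
// bands[1] = [[1,5],[2,2],[3,4]], yMax = 5
```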
Tooltip consideration: When hovering, we show the raw (individual) value per series in the tooltip (e.g., “4-GPU: 120 GPUs”), not the accumulated value. But the dot position on the chart must be at the accumulated height so it sits on the visible area edge.
Backward compatibility decision: Rather than having old .viz.bin files show empty fragmentation charts, we default gpus_per_node to 1, making every GPU its own “node.” This means: 1-GPU tasks never cause fragmentation (they always fit in a 1-GPU node), but multi-GPU tasks see 100% of idle GPUs as fragmented (since no 1-GPU “node” can host a 2+ GPU task). This is technically correct per the FGD formula and still provides useful signal about how multi-GPU jobs struggle with fragmentation.
Node topology matters: The small 36:36:36 cluster uses 4 GPUs per server (from phase_e.json), while the Alibaba 6200-GPU cluster uses 8 per server (from phase_e_alibaba.json). This directly affects fragmentation calculations — with 4 GPUs/node, a node with 3 free GPUs can’t fit a 4-GPU job, stranding those 3 GPUs. With 8 GPUs/node, the fragmentation patterns are even more dramatic since an 8-GPU job needs an entirely free node.
Node topology in the binary format: The gpus_per_node value is stored in the config JSON section of the .viz.bin file (not in the binary header), so adding it required no changes to the binary format itself. The JS decoder already parses the config JSON flexibly, meaning this new field is automatically available to the fragmentation calculator without any decoder changes. This is the benefit of using JSON for extensible config while keeping the fixed-size binary format for per-round data.
Alibaba cluster at 8 GPUs/node: 6,200 GPUs become 775 nodes. With the fragmentation calculator doing O(nodes * task_types) per round, that’s ~4,650 operations per round — trivial for the JS engine even across thousands of rounds.
Responsive grid strategy: Rather than using auto-fit with minmax() (which can produce unpredictable column counts), explicit breakpoints give cleaner control: 3 columns above 1200px (wide landscape), 2 columns between 600-1200px (portrait or smaller landscape), and 1 column below 600px. The chart-container canvas already has width: 100%, so canvases will fill whatever column width they get. The canvas width attribute (570) sets the internal resolution, while CSS width: 100% scales it to fit.
career
Reviewed job postings for an Engineering Program Manager role at Apple (Retail & Marcom Engineering) and a Reliability Engineering TPM role at Anthropic. Tailored resumes for both, differentiating the narrative between “shipping new things” and “keeping things running.”
This is a job posting for an Engineering Program Manager role at Apple’s Retail & Marcom Engineering team. The role focuses on leading strategic, multi-year programs involving both software and hardware solutions. Key themes: cross-functional leadership, structuring ambiguity, and balancing technical depth with business strategy.
This is a Reliability Engineering TPM role at Anthropic — a specialized niche combining incident management program ownership with traditional TPM skills. The JD emphasizes operational excellence and learning from incidents, which signals they want someone who thinks systemically about reliability, not just project-manages engineering work.
Why this role is different from the Launches TPM you already applied for:
- The Launches role is about shipping new things (model releases, product coordination, GTM)
- The Reliability role is about keeping things running (incident management, operational excellence, learning from failures)
- The key signal: this role has on-call responsibilities and reports into a team led by Todd Underwood (ex-Google SRE founder of ML SRE, ex-OpenAI)
- Anthropic’s November 2024 “silent degradation” incident — where Claude ran degraded for a month with all dashboards green — is the defining challenge this role was created to solve
Key design decisions in this resume:
- Consolidated your two Microsoft roles (PM2 + Senior TPM) into a single “Senior Engineering Program Manager” entry spanning 2021-Present. This creates a 4+ year multi-year narrative that directly mirrors Apple’s emphasis on “multi-year initiatives.”
- Included Applied Materials (2010-2011) specifically for the hardware angle — Apple’s JD mentions “software and hardware solutions” and your semiconductor manufacturing experience is a genuine differentiator most TPM candidates lack.
- Every bullet was reframed from Azure/Microsoft-specific jargon into platform-neutral language Apple’s hiring team will recognize (ROB -> “governance model”, KR slate -> “decision frameworks”, livesite -> “operational excellence”).
Content matching strategy for this role: The JD has 8 explicit responsibilities. I’m pulling the best version of each matching bullet from across all 42 resumes, then reframing the language from “infrastructure launches” and “platform performance” into “incident management program ownership” and “reliability improvement.” Same experiences, different emphasis. The Security TPM, Airbnb, MongoDB, and DoorDash resumes have the strongest incident/reliability framing of your bullets.
Key differentiation from your Launches resume: This resume tells a fundamentally different story with the same experiences. The Launches resume frames you as someone who ships new things (AKS SLA launch, SKU framework, Go/No-Go gates). This Reliability resume frames you as someone who keeps things running (incident response, MTTR reduction, playbooks, detection frameworks). The role title changed from “Infrastructure Engineering” to “Platform Reliability & Incident Management,” and the lead skill category shifted from “Launch Operations” to “Incident Management & Reliability.” Same person, completely different narrative optimized for a different hiring manager’s priorities.
The prep report includes a draft “Why Anthropic?” response that references their November 2024 silent degradation incident and connects your Project Mukti work directly to the exact problem they’re trying to solve. Worth reviewing before writing your application response.
openclaw
Fixed broken podcast episode titles in the existing RSS feed and deployed a robust title extraction cascade to prevent recurrence. Added suspect-title warnings to Signal notifications and wrote a comprehensive test suite for the extraction logic.
Two layers of the fix:
- Retroactive (just done): Patched the 6 broken episodes in the existing feed.xml by looking up titles via NotebookLM’s fulltext API and re-uploading.
- Prospective (committed earlier): The new _title_from_fulltext() function parses the first lines of document content for future episodes, so this problem won’t recur. The 6-step cascade with verbose logging means we can diagnose any future title failures from the journalctl logs.
Your podcast app should pick up the corrected titles on its next feed refresh (typically within an hour, depending on the app).
Three layers of protection:
- 6-step title cascade (already deployed) — parses fulltext content, AI guide summary, notebook title, source title, notebook description, and URL cleanup. The fulltext content parsing alone handles all academic PDFs we tested. The chain never falls through to “Untitled Episode” anymore.
- 22 unit tests — cover every title extraction helper with real-world patterns (academic paper content, URLs, filenames, author lines, markdown bold). If anyone modifies _title_from_fulltext and breaks it, the tests catch it immediately.
- Suspect-title warning in Signal — if a title still looks bad after the cascade (URL, single word, “Untitled Episode”), your Signal notification will include a warning flag so you know to manually fix it before it sits in your podcast app with a bad name.
The verbose _log() statements at every cascade step also mean you can diagnose any future issues with journalctl --user -u article-podcast-worker.service — each step reports exactly what it tried and what it got.
personal-finance
Built a Sankey diagram view for the budget flow tool, adding d3-sankey-powered income-to-spending visualizations with tab switching, previous-year/YTD toggles, and interactive tooltips. Discovered and fixed a data accuracy issue where YNAB inflows inflated the gross income node. Also explored Fidelity API access for direct brokerage data integration.
Fidelity (like most brokerages) uses the “Durable Data API” open standard for data sharing, but only with vetted aggregation partners — not individual developers. This is common across the brokerage industry. The practical architecture for personal finance tools is: Brokerage -> Aggregator (Plaid/MX/Finicity) -> App (YNAB) -> Your tool (via YNAB API). Each layer adds latency and potential breakage, but avoids the need for direct brokerage API access.
The d3-sankey library works by computing node positions and link paths from a graph of {source, target, value} links. It uses an iterative relaxation algorithm to minimize link crossings. We’ll use the individual d3 packages (d3-sankey, d3-selection, etc.) instead of the monolithic d3 package for better tree-shaking with Bun’s bundler.
The key HTML change is wrapping the table inside a #tabs-wrapper that’s hidden until data loads, then using data-tab attributes on buttons for clean tab switching. The existing #flow-table no longer needs its own display:none since the tab panel handles visibility. The tooltip is placed outside the container so it can use position: fixed without clipping issues.
The data transformation is the key challenge here. The FlowLine[] array is linear, but the Sankey needs a directed acyclic graph. The approach:
- Extract totals from specific labels (“Total Gross Income”, “Total Taxes”, etc.) for Level 1 links
- Use the section === "SPENDING" lines for Level 2 group totals
- Use section === "SPENDING_DETAIL" with the source field (e.g., “YNAB -> P0 Expenses”) to map categories back to their parent spending group for Level 3
- Skip zero/negative values since Sankey links must be positive
- The mode parameter switches between line.prev and line.actual to toggle views
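A minimal sketch of the transform under those rules (the FlowLine shape here is an assumption reconstructed from the notes; d3-sankey accepts nodes plus links that reference node indices):

```typescript
// Hypothetical FlowLine shape: label, section, optional "source -> target"
// parent reference, and prev/actual values for the two view modes.
interface FlowLine { label: string; section: string; source?: string; prev: number; actual: number }
interface SankeyInput {
  nodes: Array<{ name: string }>;
  links: Array<{ source: number; target: number; value: number }>;
}

function toSankey(lines: FlowLine[], mode: "prev" | "ytd"): SankeyInput {
  const nodes: Array<{ name: string }> = [];
  const index = new Map<string, number>();
  const nodeId = (name: string): number => {
    if (!index.has(name)) { index.set(name, nodes.length); nodes.push({ name }); }
    return index.get(name)!;
  };
  const links: SankeyInput["links"] = [];
  for (const line of lines) {
    const value = mode === "prev" ? line.prev : line.actual;
    if (value <= 0 || !line.source) continue; // Sankey links must be positive
    links.push({ source: nodeId(line.source), target: nodeId(line.label), value });
  }
  return { nodes, links };
}
```

The real sankey.ts also handles the level-1/2/3 grouping; this only shows the dedup-and-filter core.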
Key design decisions in the app.ts refactor:
- Renamed render to renderTable — clearer naming now that we have two render paths
- Lazy Sankey rendering — the Sankey only renders when its tab is first activated, avoiding unnecessary work on initial load
- Debounced resize — 200ms debounce prevents expensive SVG redraws during window dragging
- State management — the sankeyRendered flag tracks whether the Sankey needs a redraw when switching tabs after a data refresh
Here’s what was implemented:
- src/webapp/sankey.ts (new, ~190 lines) — Transforms FlowLine[] data into a directed acyclic graph of nodes/links, then renders an SVG Sankey diagram using d3-sankey. Handles two modes: “prev” (full year with Gross Income as root) and “ytd” (partial year starting from Available when income actuals are missing). Uses gradient-colored links, colored nodes by category type, and DOM-safe tooltips.
- src/webapp/flow.html (modified) — Added tab bar with “Flow Table” and “Sankey Diagram” tabs, a #sankey-tab panel with YTD/Previous Year toggle, tooltip element, and all necessary CSS for tabs, toggles, Sankey labels, and tooltips.
- src/webapp/app.ts (modified) — Added tab switching logic, lazy Sankey rendering on tab activation, prev/ytd toggle handlers, debounced resize re-rendering, and imported the Sankey module.
- package.json (modified) — Added d3-sankey, d3-selection, d3-array, d3-shape and their type packages.
Found the bug. The Sankey links Gross Income -> Taxes ($100K) + Deductions ($122K) + Available ($230K) sum to $452K, but actual gross is only $391K. The problem: “Available” comes from YNAB inflows which includes non-employment income (~$61K from things like ESPP sales, investment proceeds, etc.). d3-sankey sizes the node by the sum of its outflows, so Gross Income appears inflated.
Fix: Split the flow so that Gross Income -> Net Pay (computed), then have Net Pay and Other Income both feed into Available. This keeps the Sankey values accurate.
The key fix: YNAB “Available” (inflows) includes non-payroll deposits like ESPP sales and investment proceeds. By introducing a “Net Pay” intermediate node (Gross - Taxes - Deductions = $168K) and an “Other Income” node ($61K), the Sankey accurately represents both income sources feeding into the household cash flow. This also surfaces an interesting insight for the user — $61K of their spending was funded by investment liquidations, not just employment income.
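The split reduces to two subtractions (illustrative numbers below, not the actual figures):

```typescript
// Net Pay is derived from payroll figures; Other Income is whatever YNAB
// inflows ("Available") exceed it by, e.g. ESPP sales and investment proceeds.
function splitIncome(gross: number, taxes: number, deductions: number, available: number) {
  const netPay = gross - taxes - deductions;
  const otherIncome = available - netPay;
  return { netPay, otherIncome };
}

splitIncome(500, 120, 80, 360); // { netPay: 300, otherIncome: 60 }
```

With this split, every Sankey node's inflows equal its outflows, so Gross Income is no longer inflated by non-payroll deposits.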