2026-02-09 🧭 Daily News

Cowork Office Suite Preview & Attribution Graph Interpretability Advances


🧭 Cowork Excel & PowerPoint Integrations Previewed — GA Coming Later This Month

Anthropic has previewed native Office suite integrations for Claude Cowork ahead of their scheduled general availability. The Excel integration enables native spreadsheet operations — pivot tables, conditional formatting, formula generation, and data transformation — directly within the Cowork interface, with Opus 4.6 reading from and writing back to .xlsx files without an export step. PowerPoint support lets Claude generate and rearrange slides, update text and data labels, and apply formatting from natural-language instructions.

Both integrations use full context sharing: Claude can cross-reference figures in a spreadsheet while drafting the corresponding slide narrative — a workflow that previously required manually copying data between applications. Anthropic says the integrations were built in response to requests from enterprise customers for whom Office-format deliverables are a non-negotiable output format.

Use case to try first: Build a Cowork workflow that reads a CSV of monthly metrics, generates a pivot analysis in Excel, then produces a three-slide executive summary in PowerPoint — all from a single natural-language task description. The integrated context means the charts and the narrative stay in sync automatically.
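To make the pivot step concrete, here is a minimal local stand-in for the first stage of that workflow — aggregating a CSV of monthly metrics into a pivot-style summary. This is a plain-Python sketch, not Cowork's actual pipeline; the inline CSV and its `month`/`region`/`revenue` columns are hypothetical.

```python
import csv
import io
from collections import defaultdict

# Hypothetical stand-in for the "CSV of monthly metrics" in the use case.
CSV_DATA = """month,region,revenue
Jan,EMEA,120
Jan,APAC,90
Feb,EMEA,150
Feb,APAC,110
"""

def pivot_by_month(csv_text: str) -> dict:
    """Pivot flat rows into {month: {region: revenue}} — the shape a
    spreadsheet pivot table (or a slide's data labels) would consume."""
    table = defaultdict(dict)
    for row in csv.DictReader(io.StringIO(csv_text)):
        table[row["month"]][row["region"]] = int(row["revenue"])
    return dict(table)

summary = pivot_by_month(CSV_DATA)
# summary["Feb"]["EMEA"] → 150
```

In the Cowork version, Claude would perform the equivalent aggregation inside the .xlsx file itself and then reference the same figures when writing the slide narrative, which is what keeps charts and prose in sync.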

Cowork office integration Excel PowerPoint retrospective

🧭 Anthropic Interpretability Team Open-Sources Attribution Graph Tooling

Anthropic's interpretability team has released an update to their attribution graph tooling, extending the circuit-tracing methodology to production-scale models and open-sourcing a library for generating attribution graphs on popular open-weights models. Attribution graphs trace the internal path a model takes from a given prompt to its output, surfacing which feature circuits activate and in what order — making model reasoning more observable.
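As a conceptual illustration of what "tracing which circuits activate" means, the toy sketch below propagates activations through a tiny hand-wired linear graph and records each edge's contribution (upstream activation × weight). All feature names and weights are invented for illustration; this is not Anthropic's library or API.

```python
# Toy attribution graph: record how much each upstream feature
# contributes to each downstream feature in a tiny linear "model".
# Edges are listed in topological order, so each node's activation is
# complete before its outgoing edges are evaluated.

INPUT_ACTS = {"tok_paris": 1.0, "tok_capital": 0.5}

# Hypothetical weights: (upstream, downstream) -> weight.
WEIGHTS = {
    ("tok_paris", "feat_city"): 2.0,
    ("tok_capital", "feat_city"): 1.0,
    ("tok_capital", "feat_country"): 3.0,
    ("feat_city", "out_france"): 1.5,
    ("feat_country", "out_france"): 0.5,
}

def attribution_graph(acts, weights):
    """Propagate activations forward; return final activations and the
    per-edge contribution map (the 'attribution graph')."""
    acts = dict(acts)
    edges = {}
    for (src, dst), w in weights.items():
        contrib = acts.get(src, 0.0) * w
        edges[(src, dst)] = contrib
        acts[dst] = acts.get(dst, 0.0) + contrib
    return acts, edges

acts, edges = attribution_graph(INPUT_ACTS, WEIGHTS)

# Rank edges by contribution magnitude — the "which circuits fired,
# and how strongly" view that attribution graphs surface.
ranked = sorted(edges.items(), key=lambda kv: -abs(kv[1]))
```

Real attribution graphs operate on learned features inside a transformer rather than hand-named nodes, but the output has the same shape: a weighted directed graph of influence from prompt to output.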

A visualisation and annotation frontend is hosted at Neuronpedia, allowing researchers to explore circuit activations without writing code. The release is part of Anthropic's stated interpretability roadmap: reaching a state where "interpretability can reliably detect most model problems" by 2027 — a milestone the company views as foundational to deploying highly capable agentic models safely.

Why this matters for developers: Attribution graphs are not yet a production debugging tool, but they represent the research foundation for it. As agentic systems become more autonomous, having interpretable reasoning paths will be essential for auditing agent decisions — particularly in regulated industries where "the model decided" is not a sufficient explanation.

safety interpretability research open source retrospective