<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
    <channel>
        <title><![CDATA[magarcia]]></title>
        <description><![CDATA[A personal blog by Martin Garcia. Thoughts, words, and experiments about code.]]></description>
        <link>https://magarcia.io</link>
        <generator>React Router using RSS for Node.js</generator>
        <lastBuildDate>Fri, 13 Mar 2026 06:23:16 GMT</lastBuildDate>
        <atom:link href="https://magarcia.io/rss.xml" rel="self" type="application/rss+xml"/>
        <copyright><![CDATA[All rights reserved 2026, Martin Garcia]]></copyright>
        <language><![CDATA[en]]></language>
        <managingEditor><![CDATA[contact@magarcia.io (Martin Garcia)]]></managingEditor>
        <item>
            <title><![CDATA[Using Claude Code Agent Teams for Incident Investigation]]></title>
            <description><![CDATA[How I used Claude Code's agent teams to run a parallel, multi-agent production incident investigation — with one prompt and surprisingly good results.]]></description>
            <link>https://magarcia.io/using-claude-code-agent-teams-for-incident-investigation/</link>
            <guid isPermaLink="false">https://magarcia.io/using-claude-code-agent-teams-for-incident-investigation/</guid>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[developer-tools]]></category>
            <category><![CDATA[claude-code]]></category>
            <dc:creator><![CDATA[contact@magarcia.io]]></dc:creator>
            <pubDate>Mon, 02 Mar 2026 00:00:00 GMT</pubDate>
            <content:encoded>&lt;p&gt;Last week we had a production incident at work. Services were failing, pods were
restarting, and the on-call channel was filling up fast. I decided to try
something I hadn&apos;t used in a real scenario before: Claude Code&apos;s &lt;strong&gt;agent teams&lt;/strong&gt;
feature.&lt;/p&gt;
&lt;p&gt;The result surprised me. With a single, unstructured prompt and a few MCP
integrations, Claude self-organized a parallel investigation that identified the
root cause in minutes.&lt;/p&gt;
&lt;h2&gt;Agent teams: parallel Claude sessions that talk to each other&lt;/h2&gt;
&lt;p&gt;Claude Code has an experimental feature called
&lt;a href=&quot;https://code.claude.com/docs/en/agent-teams&quot;&gt;agent teams&lt;/a&gt; that coordinates
multiple Claude Code instances. One session acts as the orchestrator and spawns
teammates, each running in its own context window on a different part of the
problem.&lt;/p&gt;
&lt;p&gt;Unlike regular subagents, which run inside a single session and report back only
to the main agent, teammates communicate with each other directly. They share a
task list, claim work, and exchange findings.&lt;/p&gt;
&lt;p&gt;This matters for incident investigation because you&apos;re typically exploring
multiple hypotheses at once: is it a deployment issue? A database problem? An
infrastructure change? Having agents investigate these in parallel, sharing what
they find, mirrors how a good incident response team operates.&lt;/p&gt;
&lt;h2&gt;Enabling agent teams&lt;/h2&gt;
&lt;p&gt;Agent teams are disabled by default. To enable them, add this to your
&lt;code&gt;settings.json&lt;/code&gt; (either global at &lt;code&gt;~/.claude/settings.json&lt;/code&gt; or project-level):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &amp;quot;env&amp;quot;: {
    &amp;quot;CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS&amp;quot;: &amp;quot;1&amp;quot;
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That&apos;s all the setup needed. Once enabled, you can ask Claude to create a team
from any session.&lt;/p&gt;
&lt;h2&gt;The MCP setup that makes it useful&lt;/h2&gt;
&lt;p&gt;Agent teams pair well with
&lt;a href=&quot;https://modelcontextprotocol.io/&quot;&gt;MCP (Model Context Protocol)&lt;/a&gt; integrations. I
had three configured:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Datadog&lt;/strong&gt; — for querying logs and metrics&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Slack&lt;/strong&gt; — for reading incident channels and coordination threads&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Sentry&lt;/strong&gt; — for error tracking and exception details&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;With these in place, Claude&apos;s agents query the same observability tools your
team uses during incidents — dashboards, logs, error traces — without you
copy-pasting anything.&lt;/p&gt;
&lt;p&gt;MCP is a protocol that lets AI tools connect to external services. Claude Code
supports it natively, and you configure servers in a &lt;code&gt;.mcp.json&lt;/code&gt; file at the
root of your project or globally.&lt;/p&gt;
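Claude Code reads MCP server definitions from a top-level `mcpServers` key in that file. A minimal sketch of the shape, with a placeholder server name and package (the actual Datadog, Slack, and Sentry servers each have their own install instructions):

```json
{
  "mcpServers": {
    "datadog": {
      "command": "npx",
      "args": ["-y", "your-datadog-mcp-server"]
    }
  }
}
```

Each entry gives the command Claude Code runs to start that server; credentials usually come in through an `env` block on the same entry.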
&lt;p&gt;&lt;strong&gt;A note on data privacy:&lt;/strong&gt; this setup sends production logs, error traces, and
Slack messages to Anthropic&apos;s API. Before doing this at your company, make sure
your usage complies with your organization&apos;s data handling policies. Check
whether your API plan includes
&lt;a href=&quot;https://www.anthropic.com/policies/privacy&quot;&gt;zero data retention&lt;/a&gt;, and consider
whether the telemetry you&apos;re sending contains PII or other sensitive data that
shouldn&apos;t leave your infrastructure.&lt;/p&gt;
&lt;h2&gt;How I kicked it off&lt;/h2&gt;
&lt;p&gt;I opened a Claude Code session in our monorepo and gave it a deliberately
minimal prompt. First:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Check the incident going on and make me a summary:
https://myworkspace.slack.com/archives/C0123ABC456&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Then, after getting the initial summary:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Use a team of agents to help me find the root cause, when doing any
exploration ALWAYS use teammates to avoid filling the context of the main
thread&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;That was it. Two messages, no detailed instructions. I wanted to see how much
structure Claude would impose on its own.&lt;/p&gt;
&lt;p&gt;What happened next:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Claude created an orchestrator&lt;/strong&gt; that broke the investigation into areas:
infrastructure metrics, error tracking, recent code changes, and team
communications.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;It spawned four specialized agents&lt;/strong&gt;, one for each area. Each agent had a
clear mandate and access to the relevant MCP tools.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;The agents started investigating in parallel.&lt;/strong&gt; One queried Datadog for pod
metrics and restart patterns. Another pulled recent exceptions from Sentry. A
third reviewed recent deployments and code changes. The fourth monitored the
Slack incident channel to pick up context from what the human team was
reporting.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The orchestrator&apos;s task list ended up looking something like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Task List (incident-investigation)
─────────────────────────────────────────────
#1  ✅  Read Slack incident channel for context       @slack-monitor
#2  ✅  Query Datadog for pod restart patterns         @infra-agent
#3  ✅  Pull recent Sentry exceptions                  @error-tracker
#4  ✅  Review recent deployments and code changes     @code-reviewer
#5  ✅  Cross-reference login failures with pod metrics @infra-agent
#6  ✅  Investigate missing config parameter           @infra-agent
#7  ✅  Synthesize findings into root cause summary    @orchestrator
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Four agents, one root cause&lt;/h2&gt;
&lt;p&gt;The agents explored several hypotheses simultaneously. Some turned out to be
dead ends — a recent dependency upgrade, a configuration key change — but
because agents worked in parallel, they ruled out bad leads fast without
blocking the main thread.&lt;/p&gt;
&lt;p&gt;Cross-agent communication stood out. When the Slack-monitoring agent picked up
that teammates were reporting login failures, it shared that with the
infrastructure agent, which narrowed its search to authentication-related
services. When the code review agent found that a recent change was unrelated to
the failing service (it affected a Node.js backend, but the failing service was
PHP/Apache), it reported back and the team pivoted.&lt;/p&gt;
&lt;p&gt;The agents converged on the root cause, and the orchestrator delivered an
explicit verdict:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Root Cause: Missing Config Parameter → Pod Crash Loop → Database Connection
Leak&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;A service needed a configuration parameter during initialization. Without it,
pods crashed on start. A deployment restart turned that into a self-perpetuating
crash loop that exhausted database connections and cascaded into worker
exhaustion, memory limits exceeded, and downstream service degradation.&lt;/p&gt;
&lt;p&gt;The agents pieced the timeline together from Datadog metrics, Sentry exceptions,
and Slack messages, and the orchestrator synthesized it into the cascade chain
above.&lt;/p&gt;
&lt;p&gt;The orchestrator went further. It provided the exact CLI commands to verify the
missing parameter and confirm the diagnosis. It cross-referenced pod logs,
metrics dashboards, and error tracking to establish &lt;em&gt;when&lt;/em&gt; the parameter
disappeared and &lt;em&gt;why&lt;/em&gt; the cascade followed. Once it confirmed the root cause, it
proposed mitigation strategies: which services to restart, in what order, and
what to check after each step to confirm recovery.&lt;/p&gt;
&lt;p&gt;The whole process — from &amp;quot;here&apos;s the Slack channel&amp;quot; to &amp;quot;here&apos;s the root cause
and full cascade chain&amp;quot; — took about 10 minutes of wall clock time. A solo
walkthrough of the same investigation, manually querying Datadog and Sentry and
cross-referencing Slack messages, would typically take 30–45 minutes.&lt;/p&gt;
&lt;h2&gt;Five takeaways from a real incident&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;You don&apos;t need a perfect prompt.&lt;/strong&gt; I gave Claude almost no instructions, and
it figured out a reasonable investigation structure on its own: splitting work
into logical areas, assigning agents, and coordinating findings.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;MCP integrations are the real prerequisite.&lt;/strong&gt; The agents are only as useful as
the data they can access. Without Datadog, Slack, and Sentry connected, they&apos;d
just be guessing. Agent teams parallelize the investigation; MCP makes the
investigation possible.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;It complements human investigation well.&lt;/strong&gt; While the agents dug through logs
and metrics, the rest of the team worked in the incident channel. The agents
picked up context from Slack about what others were finding, and the human team
benefited from the agents&apos; systematic hypothesis elimination.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;It&apos;s token hungry.&lt;/strong&gt; Each agent is a separate Claude Code session with its own
context window, so four parallel agents means roughly 4x the token cost of a
single session. I use a Claude Code subscription with a monthly usage cap, so I
don&apos;t pay per-token, but I&apos;d estimate this investigation consumed the equivalent
of $8–10 in API credits compared to ~$2–3 for a single-session walkthrough.
Worth it when time matters during an active incident, but not something you&apos;d
run for every minor investigation.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The orchestrator&apos;s context stays clean.&lt;/strong&gt; Because each agent works in its own
context — processing raw logs, metrics, and API responses — the orchestrator
only receives summarized findings. It reasons about the big picture without its
context window filling up with noise. In a single-session investigation, you&apos;d
hit context limits fast when querying multiple observability tools.&lt;/p&gt;
&lt;h2&gt;Best for multi-hypothesis problems&lt;/h2&gt;
&lt;p&gt;Agent teams shine when a problem has multiple possible causes you can
investigate independently. Production incidents are a natural fit, but the
pattern applies to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Performance investigations&lt;/strong&gt; — one agent profiling the database, another
checking application metrics, another reviewing recent changes&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Security incident response&lt;/strong&gt; — parallel analysis of access logs, code
changes, and network traffic&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Complex debugging&lt;/strong&gt; — when you&apos;re not sure which layer of the stack is
responsible&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Break the problem into independent investigation areas, let agents explore them
in parallel, and have the orchestrator synthesize the findings.&lt;/p&gt;
&lt;p&gt;For simpler issues where the cause is likely in one place, a regular Claude Code
session (or even a subagent) is more cost-effective. Agent teams add value when
parallel exploration saves time.&lt;/p&gt;
&lt;h2&gt;Getting started&lt;/h2&gt;
&lt;p&gt;If you want to try this yourself:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Enable agent teams in your &lt;code&gt;settings.json&lt;/code&gt; (the config snippet above)&lt;/li&gt;
&lt;li&gt;Set up MCP integrations for the observability tools your team uses — this is
the important part&lt;/li&gt;
&lt;li&gt;Open a Claude Code session in your project and describe the problem, asking
Claude to use a team of agents&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The &lt;a href=&quot;https://code.claude.com/docs/en/agent-teams&quot;&gt;official documentation&lt;/a&gt; covers
the full feature set, including display modes (in-process vs. split panes with
tmux), how to interact with individual teammates, and how to control the team
size.&lt;/p&gt;
&lt;p&gt;Start with a low-stakes investigation to get a feel for how the coordination
works before relying on it during a real incident. And if you already have MCP
servers configured for your observability stack, you&apos;re most of the way there.&lt;/p&gt;
</content:encoded>
        </item>
        <item>
            <title><![CDATA[Moving Shell Secrets from .zshrc to 1Password CLI]]></title>
            <description><![CDATA[API keys sitting in plaintext dotfiles are one accidental push away from leaking. 1Password CLI can load them at shell startup with biometric unlock — no more secrets in git history, no more rotating tokens after a bad commit.]]></description>
            <link>https://magarcia.io/stop-hardcoding-secrets-in-your-zshrc/</link>
            <guid isPermaLink="false">https://magarcia.io/stop-hardcoding-secrets-in-your-zshrc/</guid>
            <category><![CDATA[security]]></category>
            <category><![CDATA[developer-tools]]></category>
            <category><![CDATA[cli]]></category>
            <dc:creator><![CDATA[contact@magarcia.io]]></dc:creator>
            <pubDate>Fri, 20 Feb 2026 00:00:00 GMT</pubDate>
            <content:encoded>&lt;p&gt;I keep my shell config in a dotfiles repo so I can track changes in git. But a
few API keys needed to be available at shell startup, and hardcoding them in
&lt;code&gt;.zshrc&lt;/code&gt; wasn&apos;t an option. So I loaded them from a separate &lt;code&gt;.env&lt;/code&gt; file — one
that stayed out of the repo.&lt;/p&gt;
&lt;p&gt;The secrets were still plaintext, just in a different file. One wrong &lt;code&gt;git add&lt;/code&gt;
and they&apos;d be in history. Then I thought: &lt;em&gt;1Password could handle this.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;I moved the keys there and now load them at shell startup with &lt;code&gt;op inject&lt;/code&gt; — one
call, biometric unlock, no plaintext files anywhere.&lt;/p&gt;
&lt;h2&gt;The problem with &lt;code&gt;.env&lt;/code&gt; and &lt;code&gt;.zshrc&lt;/code&gt; secrets&lt;/h2&gt;
&lt;p&gt;Most developers store secrets one of two ways: a &lt;code&gt;.env&lt;/code&gt; file or &lt;code&gt;export&lt;/code&gt;
statements scattered across shell configs. Either way, the values are plaintext
and end up in backups, dotfiles repos, or shell history.&lt;/p&gt;
&lt;p&gt;I wrote about
&lt;a href=&quot;https://magarcia.io/cross-platform-secret-storage-with-cross-keychain/&quot;&gt;secure credential storage for Node.js with &lt;strong&gt;cross-keychain&lt;/strong&gt;&lt;/a&gt;.
Shell environment variables are a different problem — they load before any
application runs, and every process in your terminal needs access to them.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;1Password CLI&lt;/strong&gt; (&lt;code&gt;op&lt;/code&gt;) solves this at the shell level.&lt;/p&gt;
&lt;h2&gt;Create a vault and enable biometric unlock&lt;/h2&gt;
&lt;p&gt;Install the
&lt;a href=&quot;https://developer.1password.com/docs/cli/get-started/&quot;&gt;1Password CLI&lt;/a&gt; first,
then set up two things before touching your shell config.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;A vault for the secrets.&lt;/strong&gt; I created a vault called &lt;code&gt;development&lt;/code&gt; with
one entry per service, each storing its secret in a &lt;code&gt;credentials&lt;/code&gt; field. That
gives you clean &lt;code&gt;op://development/&amp;lt;SERVICE_NAME&amp;gt;/credentials&lt;/code&gt;
URIs. Organize it however you like, but keep the structure consistent so the
references are easy to write.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Biometric unlock.&lt;/strong&gt; Without this, &lt;code&gt;op&lt;/code&gt; prompts for your master password every
time you open a terminal. Enable it in the 1Password desktop app under &lt;em&gt;Settings
→ Developer → &amp;quot;Integrate with 1Password CLI&amp;quot;&lt;/em&gt;. After that, &lt;code&gt;op&lt;/code&gt; authenticates
via Touch ID or your system keychain.&lt;/p&gt;
&lt;h2&gt;Split your shell into two files&lt;/h2&gt;
&lt;p&gt;With the vault in place, replace your &lt;code&gt;.env&lt;/code&gt; file with two shell files. One for
non-secret configuration that zsh sources automatically. One for secrets that
&lt;code&gt;op inject&lt;/code&gt; processes before loading.&lt;/p&gt;
&lt;h3&gt;&lt;code&gt;.zshenv&lt;/code&gt; — plain configuration, no secrets&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;export EDITOR=&amp;quot;nvim&amp;quot;
export LANG=&amp;quot;en_US.UTF-8&amp;quot;
export PATH=&amp;quot;$HOME/.local/bin:$PATH&amp;quot;
export NODE_ENV=&amp;quot;development&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Zsh sources this file automatically on every shell invocation — interactive,
non-interactive, scripts, SSH commands. No secrets here, ever.&lt;/p&gt;
&lt;h3&gt;&lt;code&gt;.zshsecrets&lt;/code&gt; — secret references, not values&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;export GITHUB_TOKEN=&amp;quot;{{ op://development/github/credentials }}&amp;quot;
export ANTHROPIC_API_KEY=&amp;quot;{{ op://development/anthropic/credentials }}&amp;quot;
export NPM_TOKEN=&amp;quot;{{ op://development/npm/credentials }}&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It&apos;s safe to commit to your dotfiles repo. Anyone reading it sees &lt;code&gt;op://&lt;/code&gt; URIs,
not credentials.&lt;/p&gt;
&lt;h3&gt;&lt;code&gt;.zshrc&lt;/code&gt; — wire it together&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;if command -v op &amp;&amp;gt;/dev/null; then
  # Capture first so a failed inject is detected; eval of empty output exits 0.
  secrets=&amp;quot;$(op inject -i ~/.zshsecrets 2&amp;gt;/dev/null)&amp;quot; \
    &amp;&amp; eval &amp;quot;$secrets&amp;quot; \
    || echo &amp;quot;⚠ op: secrets not loaded&amp;quot;
fi
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;code&gt;op inject&lt;/code&gt; reads the template, resolves every &lt;code&gt;{{ op://... }}&lt;/code&gt; reference
against your 1Password vault, and outputs the result with real values. &lt;code&gt;eval&lt;/code&gt;
executes the exports. If &lt;code&gt;op&lt;/code&gt; isn&apos;t authenticated or unavailable, the shell
still starts — you just won&apos;t have secrets loaded until you run &lt;code&gt;op signin&lt;/code&gt; and
re-source.&lt;/p&gt;
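As a toy illustration of that substitution step (a sketch only; the real `op inject` resolves each reference against your vault through the 1Password app, and `lookup` here is a stand-in for that):

```typescript
// Stand-in for the vault: maps op:// reference URIs to secret values.
const lookup: { [ref: string]: string } = {
  "op://development/github/credentials": "ghp_example123",
};

// Replace every {{ op://vault/item/field }} reference with its value.
function inject(template: string): string {
  return template.replace(/\{\{\s*(op:\/\/[^}]*?)\s*\}\}/g, (_m: string, ref: string) => {
    const value: string | undefined = lookup[ref];
    if (value === undefined) throw new Error("unresolved reference: " + ref);
    return value;
  });
}

console.log(inject('export GITHUB_TOKEN="{{ op://development/github/credentials }}"'));
// export GITHUB_TOKEN="ghp_example123"
```

An unresolved reference should be an error rather than a silent empty export, which is why the shell wrapper treats a non-zero exit from `op inject` as "secrets not loaded".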
&lt;h2&gt;Why &lt;a href=&quot;https://developer.1password.com/docs/cli/reference/commands/inject&quot;&gt;&lt;code&gt;op inject&lt;/code&gt;&lt;/a&gt; over &lt;a href=&quot;https://developer.1password.com/docs/cli/reference/commands/read&quot;&gt;&lt;code&gt;op read&lt;/code&gt;&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;Two ways to pull secrets from &lt;code&gt;op&lt;/code&gt;. Per-variable reads:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;export GITHUB_TOKEN=&amp;quot;$(op read &apos;op://development/github/credentials&apos; 2&amp;gt;/dev/null)&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Or template injection, which I use. The difference is performance. Each
&lt;code&gt;op read&lt;/code&gt; spawns a subprocess and makes an API call. With 5+ secrets, shell
startup grows by 1–2 seconds. &lt;code&gt;op inject&lt;/code&gt; resolves everything in a single call —
around 200–400ms with biometric unlock enabled.&lt;/p&gt;
&lt;h2&gt;The cleanup you can&apos;t skip&lt;/h2&gt;
&lt;p&gt;After migrating, I did three things:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Removed every hardcoded secret from my shell files.&lt;/strong&gt; A quick grep finds
stragglers:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;grep -rnE &apos;export.*(KEY|TOKEN|SECRET)&apos; ~/.zsh*
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Scrubbed my dotfiles git history.&lt;/strong&gt; If you ever committed secrets — even if
you deleted them later — they&apos;re still in the history.
&lt;a href=&quot;https://github.com/newren/git-filter-repo&quot;&gt;&lt;strong&gt;git filter-repo&lt;/strong&gt;&lt;/a&gt; or
&lt;a href=&quot;https://rtyley.github.io/bfg-repo-cleaner/&quot;&gt;&lt;strong&gt;BFG Repo-Cleaner&lt;/strong&gt;&lt;/a&gt; can purge
them.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Rotated every token that had been in plaintext.&lt;/strong&gt; Even if your secrets never
touched a git repo, they may have lived in Time Machine backups or shell
history. If you&apos;re not certain of your full exposure, rotate. It&apos;s not optional.&lt;/p&gt;
&lt;h2&gt;Tradeoffs&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Startup latency.&lt;/strong&gt; The &lt;code&gt;op inject&lt;/code&gt; call adds 200–400ms to shell startup.
Without biometric unlock, &lt;code&gt;op&lt;/code&gt; prompts for your master password — painful if you
open terminals frequently. Even with it enabled, the Touch ID prompt interrupts
flow when opening terminals in quick succession.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Authentication gaps.&lt;/strong&gt; If &lt;code&gt;op&lt;/code&gt; isn&apos;t authenticated, your secrets won&apos;t load.
You&apos;ll notice when a command fails, then run &lt;code&gt;op signin&lt;/code&gt; and &lt;code&gt;source ~/.zshrc&lt;/code&gt;.
It&apos;s minor friction and a reasonable trade for no plaintext secrets.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;No remote support.&lt;/strong&gt; SSH sessions to remote machines won&apos;t have &lt;code&gt;op&lt;/code&gt;. If you
need secrets there, you&apos;ll need a different mechanism — &lt;code&gt;op&lt;/code&gt; service accounts,
or forwarding specific variables through SSH config.&lt;/p&gt;
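For the forwarding route, OpenSSH can pass specific environment variables along with the connection; a sketch (host and variable names are examples, and the remote side must opt in):

```
# ~/.ssh/config on the machine where op runs
Host devbox
  SendEnv GITHUB_TOKEN NPM_TOKEN

# /etc/ssh/sshd_config on the remote machine
AcceptEnv GITHUB_TOKEN NPM_TOKEN
```

The values still land in plaintext in the remote process environment, so this only narrows the exposure; service accounts are the cleaner option for anything long-lived.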
&lt;h2&gt;A dotfiles repo you can make public&lt;/h2&gt;
&lt;p&gt;My dotfiles repo is now public-safe. Every secret lives in 1Password, a template
file with &lt;code&gt;op://&lt;/code&gt; URIs loads them at startup, and Touch ID handles the rest.&lt;/p&gt;
&lt;p&gt;If you need secrets on remote machines, the
&lt;a href=&quot;https://developer.1password.com/docs/cli/&quot;&gt;1Password CLI docs&lt;/a&gt; cover service
accounts and SSH agent forwarding.&lt;/p&gt;
</content:encoded>
        </item>
        <item>
            <title><![CDATA[Your Test Output Is Burning Tokens: Taming Verbose Reporters for AI Agents]]></title>
            <description><![CDATA[Default test reporters print 200+ "PASS" lines nobody reads. In CI and AI agents, that output is pure waste. Detect the environment, pick a minimal reporter, and reclaim your context window.]]></description>
            <link>https://magarcia.io/your-test-output-is-burning-tokens/</link>
            <guid isPermaLink="false">https://magarcia.io/your-test-output-is-burning-tokens/</guid>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[testing]]></category>
            <category><![CDATA[developer-tools]]></category>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[performance]]></category>
            <dc:creator><![CDATA[contact@magarcia.io]]></dc:creator>
            <pubDate>Mon, 09 Feb 2026 00:00:00 GMT</pubDate>
            <content:encoded>&lt;p&gt;Test runners like &lt;a href=&quot;https://jestjs.io/&quot;&gt;&lt;strong&gt;Jest&lt;/strong&gt;&lt;/a&gt; and
&lt;a href=&quot;https://vitest.dev/&quot;&gt;&lt;strong&gt;Vitest&lt;/strong&gt;&lt;/a&gt; ship with reporters designed for humans
watching terminals. Every file gets a line:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;PASS src/components/Button.test.tsx
PASS src/components/Card.test.tsx
PASS src/components/Dialog.test.tsx
PASS src/components/Dropdown.test.tsx
PASS src/utils/format.test.ts
PASS src/utils/date.test.ts
... (200 more lines)
FAIL src/components/Nav.test.tsx
  ● Nav &amp;gt; renders active state

    Expected: &amp;quot;active&amp;quot;
    Received: &amp;quot;inactive&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;For a developer at their terminal, scrolling green text reassures. But tests now
run in two other contexts where that output costs more than it helps:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;CI&lt;/strong&gt; — nobody reads the log unless something fails. A red build forces you
to scroll past hundreds of &amp;quot;PASS&amp;quot; lines to find the failure.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;AI agents&lt;/strong&gt; — every line gets read, and each &amp;quot;PASS&amp;quot; consumes
tokens, filling the context window with successful test output instead of the
actual problem.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;We hit this at &lt;a href=&quot;https://buffer.com&quot;&gt;&lt;strong&gt;Buffer&lt;/strong&gt;&lt;/a&gt;. A 215-suite test run produced
~3,500 tokens of output, almost all &amp;quot;PASS&amp;quot; lines. Our AI agent spent more tokens
reading test results than writing code. We tried adding &lt;code&gt;--reporter=dot&lt;/code&gt; to our
&lt;code&gt;CLAUDE.md&lt;/code&gt; instructions, but the agent didn&apos;t always use it. The flag was a
suggestion; we needed a guarantee.&lt;/p&gt;
&lt;h2&gt;Detect the Environment, Choose the Reporter&lt;/h2&gt;
&lt;p&gt;The fix: detect the environment in your test config and switch reporters
automatically. No agent instructions required, no flags to remember.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://docs.anthropic.com/en/docs/claude-code&quot;&gt;Claude Code&lt;/a&gt; sets
&lt;a href=&quot;https://github.com/anthropics/claude-code/issues/531&quot;&gt;&lt;code&gt;CLAUDECODE=1&lt;/code&gt;&lt;/a&gt; in every
shell it spawns. CI providers — GitHub Actions, GitLab CI, CircleCI, Travis CI,
and Jenkins — all set
&lt;a href=&quot;https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/store-information-in-variables#default-environment-variables&quot;&gt;&lt;code&gt;CI=true&lt;/code&gt;&lt;/a&gt;.
Your config reads these variables and picks the right reporter — deterministic
regardless of how the agent invokes the test command.&lt;/p&gt;
&lt;p&gt;Here&apos;s what we shipped at Buffer. CI and Claude Code each get their own reporter
configuration; local development keeps the default.&lt;/p&gt;
&lt;p&gt;For &lt;strong&gt;Jest&lt;/strong&gt;, add the logic to &lt;code&gt;jest.config.ts&lt;/code&gt;. The
&lt;a href=&quot;https://jestjs.io/docs/configuration#reporters-arraymodulename--modulename-options&quot;&gt;&lt;code&gt;summary&lt;/code&gt; reporter&lt;/a&gt;
prints a final count plus full details for any failures, with no per-file
output. Jest&apos;s default &lt;code&gt;summaryThreshold&lt;/code&gt; is 20, meaning the detailed
failure summary only appears when a run spans more than 20 test suites. Set it
to &lt;code&gt;0&lt;/code&gt; so every failure prints in
full. In CI, you can pair it with a custom reporter that collects failures for
GitHub PR comments:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;const isCI = process.env.CI === &amp;quot;true&amp;quot;;
const isClaude = process.env.CLAUDECODE === &amp;quot;1&amp;quot;;

function getReporters() {
  if (isCI) {
    return [[&amp;quot;summary&amp;quot;, { summaryThreshold: 0 }], &amp;quot;jest-ci-reporter&amp;quot;];
  }
  if (isClaude) {
    return [[&amp;quot;summary&amp;quot;, { summaryThreshold: 0 }]];
  }
  return [&amp;quot;default&amp;quot;];
}

export default {
  reporters: getReporters(),
  // ... rest of your config
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;For &lt;strong&gt;Vitest&lt;/strong&gt;, add it to &lt;code&gt;vitest.config.ts&lt;/code&gt;. The
&lt;a href=&quot;https://vitest.dev/guide/reporters#dot-reporter&quot;&gt;&lt;code&gt;dot&lt;/code&gt; reporter&lt;/a&gt; compresses
each file to a single character — a dot for pass, an &lt;code&gt;x&lt;/code&gt; for fail:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;import { defineConfig } from &amp;quot;vitest/config&amp;quot;;

const isCI = process.env.CI === &amp;quot;true&amp;quot;;
const isClaude = process.env.CLAUDECODE === &amp;quot;1&amp;quot;;

function getReporters() {
  if (isCI) {
    return [&amp;quot;dot&amp;quot;, &amp;quot;ci-reporter&amp;quot;];
  }
  if (isClaude) {
    return [&amp;quot;dot&amp;quot;];
  }
  return [&amp;quot;default&amp;quot;];
}

export default defineConfig({
  test: {
    reporters: getReporters(),
    // ... rest of your config
  },
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Both frameworks also accept reporter flags on the command line
(&lt;a href=&quot;https://jestjs.io/docs/cli#--reporters&quot;&gt;&lt;code&gt;--reporters&lt;/code&gt;&lt;/a&gt; for Jest,
&lt;a href=&quot;https://vitest.dev/guide/cli#reporter&quot;&gt;&lt;code&gt;--reporter&lt;/code&gt;&lt;/a&gt; for Vitest). But relying
on an AI agent to pass the right flag is probabilistic — the agent may forget or
run a different test script that omits it. Environment variables make it
deterministic.&lt;/p&gt;
&lt;p&gt;The resulting matrix:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Environment&lt;/th&gt;
&lt;th&gt;Jest Reporter&lt;/th&gt;
&lt;th&gt;Vitest Reporter&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Local dev&lt;/td&gt;
&lt;td&gt;&lt;code&gt;default&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;default&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CI&lt;/td&gt;
&lt;td&gt;&lt;code&gt;summary&lt;/code&gt; + CI reporter&lt;/td&gt;
&lt;td&gt;&lt;code&gt;dot&lt;/code&gt; + CI reporter&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Claude Code&lt;/td&gt;
&lt;td&gt;&lt;code&gt;summary&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;dot&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;What the Agent Sees&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Before&lt;/strong&gt; (default reporter, ~250 lines):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;PASS src/components/Button.test.tsx (3 suites, 12 tests)
PASS src/components/Card.test.tsx (2 suites, 8 tests)
... (200+ more PASS lines)
FAIL src/components/Nav.test.tsx
  ● Nav &amp;gt; renders active state
    expect(received).toBe(expected)
    Expected: &amp;quot;active&amp;quot;
    Received: &amp;quot;inactive&amp;quot;

Test Suites: 1 failed, 214 passed, 215 total
Tests:       1 failed, 847 passed, 848 total
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;After&lt;/strong&gt; (summary reporter, ~10 lines):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;FAIL src/components/Nav.test.tsx
  ● Nav &amp;gt; renders active state
    expect(received).toBe(expected)
    Expected: &amp;quot;active&amp;quot;
    Received: &amp;quot;inactive&amp;quot;

Test Suites: 1 failed, 214 passed, 215 total
Tests:       1 failed, 847 passed, 848 total
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Same failure details, 96% less output.&lt;/p&gt;
&lt;p&gt;When all tests pass, the gap widens further. The default reporter prints every
file name — 215 lines. The summary reporter prints two:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Test Suites: 215 passed, 215 total
Tests:       848 passed, 848 total
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Trade-offs&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;You lose progress feedback.&lt;/strong&gt; The summary reporter stays silent until the
suite finishes. For long-running suites, the agent sees nothing until
completion. In practice this has not mattered — AI agents do not need
reassurance that the process is running.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Debugging intermittent failures gets harder.&lt;/strong&gt; The default reporter&apos;s per-file
timing helps identify slow or flaky tests. Use the &lt;code&gt;verbose&lt;/code&gt; reporter when
investigating flakiness.&lt;/p&gt;
&lt;h2&gt;The Same Fix Applies to Linters and Build Logs&lt;/h2&gt;
&lt;p&gt;Test reporters are one interface between your tools and whatever reads the
output. Linters and type checkers have the same problem. Anywhere a tool
produces verbose output that an AI agent consumes, you can detect the
environment and switch to a compact format. Check your test config — if
&lt;code&gt;process.env.CI&lt;/code&gt; is set and you&apos;re still using the default reporter, you&apos;re
paying for output nobody reads.&lt;/p&gt;
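&lt;p&gt;As a sketch, the same switch for ESLint might look like this. The &lt;code&gt;stylish&lt;/code&gt;, &lt;code&gt;compact&lt;/code&gt;, and &lt;code&gt;unix&lt;/code&gt; formatters are built into ESLint; the environment detection is something you wire up yourself, and &lt;code&gt;CLAUDECODE&lt;/code&gt; is again an assumption:&lt;/p&gt;

```typescript
// Sketch: the environment switch applied to ESLint's --format flag.
// CLAUDECODE is an illustrative assumption; the formatters are ESLint built-ins.
function eslintFormat(env: { [k: string]: string | undefined }): string {
  if (env.CI) return "unix"; // one line per problem, easy to grep
  if (env.CLAUDECODE) return "compact"; // minimal context for the agent
  return "stylish"; // the human-friendly default
}

// e.g. spawn ESLint with ["--format", eslintFormat(process.env)]
console.log(eslintFormat(process.env));
```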
</content:encoded>
        </item>
        <item>
            <title><![CDATA[AI-Native Development: When Building Is Faster Than Planning]]></title>
            <description><![CDATA[AI coding tools like Claude Code have inverted software economics. When a prototype takes hours instead of weeks, coordination becomes the bottleneck. How AI-native companies ship faster by building first and planning later.]]></description>
            <link>https://magarcia.io/when-ai-made-building-cheaper-than-the-meetings-to-plan-it/</link>
            <guid isPermaLink="false">https://magarcia.io/when-ai-made-building-cheaper-than-the-meetings-to-plan-it/</guid>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[developer-tools]]></category>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[typescript]]></category>
            <dc:creator><![CDATA[contact@magarcia.io]]></dc:creator>
            <pubDate>Sat, 24 Jan 2026 00:00:00 GMT</pubDate>
            <content:encoded>&lt;p&gt;&lt;strong&gt;AI coding tools have inverted software economics.&lt;/strong&gt; When a working prototype
takes hours instead of weeks, the meetings to plan that prototype cost more than
building it. This shift is creating a new category of AI-native companies that
ship faster by building first and planning from working software.&lt;/p&gt;
&lt;p&gt;We debated a feature for weeks. Then someone built it in a day with AI. When
execution costs less than coordination, meetings become the bottleneck. This is
reshaping how software teams work.&lt;/p&gt;
&lt;h2&gt;Prototyping is Now Fast and Cheap&lt;/h2&gt;
&lt;p&gt;We can build things faster and cheaper than ever. The second-order effects are
profound. When execution is cheap, the entire apparatus we built around
expensive execution—planning meetings, design reviews, sprint ceremonies,
estimation rituals—starts to look like overhead.&lt;/p&gt;
&lt;p&gt;A real example from Buffer: We had a project that got deprioritized. The feature
had clear value. The backend logic already existed in a legacy service, but it
needed to be migrated to our new systems—new APIs, new frontend, new patterns.
The project kept slipping down the backlog because the estimated effort was
significant: backend design proposals, architecture reviews, frontend designs,
multiple rounds of feedback. We spent more time discussing &lt;em&gt;whether&lt;/em&gt; to build it
than it would have taken to just build it.&lt;/p&gt;
&lt;p&gt;Then one of my coworkers decided to just do it. With
&lt;a href=&quot;https://www.anthropic.com/claude-code&quot;&gt;Claude Code&lt;/a&gt; and a few iterations, he
had a working prototype of the entire project in less than a day.&lt;/p&gt;
&lt;p&gt;The prototype wasn&apos;t production-ready—it needed cleanup and proper tests—but it
followed our patterns because we&apos;ve taught our AI tools our codebase
conventions. Refactoring working code that follows your architecture is far
easier than rewriting something built on foreign patterns.&lt;/p&gt;
&lt;p&gt;The math feels surreal. We spent days in meetings, drafting specifications,
debating approaches—all to conclude &amp;quot;not now, too expensive.&amp;quot; Meanwhile, the
actual implementation took hours.&lt;/p&gt;
&lt;h2&gt;The Coordination Cost Now Exceeds the Execution Cost&lt;/h2&gt;
&lt;p&gt;We&apos;ve crossed a threshold where coordination often costs more than execution.
The meeting to discuss a feature—aligning stakeholders, gathering requirements,
getting approval—can take longer than implementing that feature with AI
assistance.&lt;/p&gt;
&lt;p&gt;This inverts decades of software economics. Planning meetings existed because
&lt;em&gt;measure twice, cut once&lt;/em&gt; made sense when cutting was expensive. When cutting is
cheap, measure once, cut, look at the result, cut again, and let the iteration
loop serve as the planning process.&lt;/p&gt;
&lt;h2&gt;Build First, Then React&lt;/h2&gt;
&lt;p&gt;The old workflow: Spec → Approval → Build → Demo → Feedback → Iterate.&lt;/p&gt;
&lt;p&gt;The emerging workflow: Build → Demo → Feedback → Iterate. The spec emerges
from the iterations.&lt;/p&gt;
&lt;p&gt;This works because people struggle to articulate abstract wants. &amp;quot;What should
the dashboard show?&amp;quot; produces vague answers. But &amp;quot;Is this dashboard useful?&amp;quot;
produces specific, actionable feedback. Show someone a working prototype and ask
&amp;quot;What&apos;s wrong with this?&amp;quot;—you&apos;ll get better requirements in three iterations
than in ten hours of requirements meetings.&lt;/p&gt;
&lt;p&gt;The prototype becomes the functional specification. You don&apos;t write a document
describing what the software should do; you build software that does something
and refine from there. The intent and constraints still get documented—but the
mechanics are defined by working code.&lt;/p&gt;
&lt;h2&gt;Product and Design Need to Adapt Too&lt;/h2&gt;
&lt;p&gt;This shift isn&apos;t just about engineering. Product and design processes were also
built around the assumption that implementation is expensive. When building was
slow, it made sense to invest heavily in wireframes, mockups, and PRDs—proxies
for the real thing that were cheaper to iterate on than actual software.&lt;/p&gt;
&lt;p&gt;That calculus has changed. For interaction-heavy features—dashboards, forms,
workflows—high-fidelity mockups often take longer than building the real thing.
But this doesn&apos;t mean skipping design thinking entirely. A quick sketch or
wireframe combined with rapid AI prototyping lets you explore possibilities
faster than either approach alone.&lt;/p&gt;
&lt;p&gt;Design isn&apos;t obsolete. Designers and product managers become critics and
directors—reacting to working prototypes and guiding them toward good solutions
rather than authoring specifications upfront.&lt;/p&gt;
&lt;h2&gt;What AI-Native Companies Look Like&lt;/h2&gt;
&lt;p&gt;Companies that embrace this shift don&apos;t just use AI—they become AI-native. Their
processes, timelines, and expectations restructure around what&apos;s now possible.&lt;/p&gt;
&lt;p&gt;Anthropic itself is the clearest example. Claude Code launched in February 2025
and became generally available that May. Six months later, it hit $1 billion in
annualized revenue. But what&apos;s more striking is how they built on that momentum.&lt;/p&gt;
&lt;p&gt;In January 2026, Anthropic shipped
&lt;a href=&quot;https://www.anthropic.com/news/introducing-anthropic-labs&quot;&gt;Cowork&lt;/a&gt;—a desktop
agent that brings Claude Code&apos;s power to non-technical users. A team of four
engineers built the entire product in roughly ten days, using Claude Code
itself. The AI coding tool built its own non-technical sibling.&lt;/p&gt;
&lt;p&gt;In a traditional company, a product like Cowork would take months of
requirements gathering, design reviews, architecture proposals, and stakeholder
alignment. Anthropic skipped all of that. They noticed users were forcing Claude
Code to do non-coding tasks—vacation research, slide decks, email cleanup—and
instead of debating whether to build a solution, they just built one.&lt;/p&gt;
&lt;p&gt;But Anthropic builds AI. What about companies that just &lt;em&gt;use&lt;/em&gt; it?&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://sentry.io&quot;&gt;Sentry&lt;/a&gt; built their
&lt;a href=&quot;https://docs.sentry.io/product/sentry-mcp/&quot;&gt;MCP server&lt;/a&gt; using Claude Code. The
result is excellent—good enough to make me reconsider our own error tracking
setup. When your integration is better because AI helped your engineers build it
faster and iterate more, that&apos;s the competitive advantage in action.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://lovable.dev&quot;&gt;Lovable&lt;/a&gt; reached $100 million in annual revenue eight
months after launch, then hit $200 million ARR just four months later, with a
team of around 45 people. The entire platform is powered by Claude, and when
Claude 4 launched, their CEO posted that it &amp;quot;erased most of Lovable&apos;s errors.&amp;quot;
They&apos;re not building AI—they&apos;re building on it, and shipping at a pace that
would have been impossible with traditional development cycles.&lt;/p&gt;
&lt;p&gt;This is the competitive advantage of AI-native companies: they validate ideas
with working software instead of slide decks, and iterate in days instead of
quarters. The companies that figure this out will outpace those still running
traditional planning cycles.&lt;/p&gt;
&lt;h2&gt;Build Fast, But Own What You Ship&lt;/h2&gt;
&lt;p&gt;There&apos;s a risk here worth naming: just because something is cheap to build
doesn&apos;t mean it should exist. An expanding backlog isn&apos;t a win—it invites
feature bloat and unfocused products. The new discipline is asking: &amp;quot;should this
exist at all?&amp;quot;&lt;/p&gt;
&lt;p&gt;This is the paradox of cheap execution: &lt;em&gt;because&lt;/em&gt; we can build faster, we need
sharper judgment about &lt;em&gt;what&lt;/em&gt; to build. The old constraint—&amp;quot;this would take too
long&amp;quot;—forced prioritization. Without it, we can efficiently build the wrong
things. Build-first thinking requires more product discipline, not less.&lt;/p&gt;
&lt;p&gt;Another discipline becomes more important: code review.&lt;/p&gt;
&lt;h2&gt;Vibe Coding Doesn&apos;t Belong in Production&lt;/h2&gt;
&lt;p&gt;&amp;quot;Vibe coding&amp;quot;—shipping AI-generated code you don&apos;t understand—is tempting when
building is this fast. The prototype works, the tests pass, why not just merge
it? Because someone has to be responsible when it breaks at 3am.&lt;/p&gt;
&lt;p&gt;Every change to production should be reviewed by someone who will be on-call for
that system. This wisdom predates AI. The difference is that AI makes it easier
to produce code that works without understanding why it works. That&apos;s fine for
exploration. It&apos;s dangerous for production.&lt;/p&gt;
&lt;p&gt;This is about operational accountability, not gatekeeping. Non-engineers can
absolutely use AI to build useful prototypes. But the constraint for production
isn&apos;t technical ability—it&apos;s operational accountability. If you won&apos;t be paged
when the system fails, your changes need review from someone who will be. The
reviewer isn&apos;t blocking your contribution; they&apos;re accepting responsibility for
it.&lt;/p&gt;
&lt;p&gt;Engineers bring knowledge that AI doesn&apos;t replace: operational experience and
the accumulated wisdom of having been paged at 3am for problems that looked fine
in testing. AI makes engineers faster; it doesn&apos;t make them unnecessary.&lt;/p&gt;
&lt;p&gt;This creates a natural quality gate that scales with AI acceleration. Build as
fast as you want, prototype freely, but the path to production runs through
someone who owns the consequences, and that ownership separates demos from
deployments.&lt;/p&gt;
&lt;h2&gt;The Job Changes, But It Doesn&apos;t Disappear&lt;/h2&gt;
&lt;p&gt;What&apos;s shifting is the nature of the work. Less time writing boilerplate. More
time understanding problems, directing AI, reviewing output, and making judgment
calls. The developer becomes a &lt;em&gt;code director&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;This requires different skills. Clarity becomes primary—you need to articulate
precise intent because AI executes instantly and literally. Vague input produces
immediate, concrete wrong output. Debugging AI-generated code is a distinct
skill: you&apos;re evaluating someone else&apos;s implementation choices, not reasoning
through your own intentions.&lt;/p&gt;
&lt;p&gt;Review becomes more important, not less. When anyone can generate plausible
code, the ability to evaluate that code—to spot subtle bugs, understand
performance implications, anticipate edge cases—is what separates working
software from time bombs. The engineers who thrive will be those who can direct
AI effectively and then critically assess what it produces.&lt;/p&gt;
&lt;p&gt;The job has always been solving problems with software. AI strips away the
pretense that it was about typing.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;We&apos;re early in this shift. The tools are evolving rapidly, and we&apos;re all
figuring out new workflows in real time. But the direction is clear: execution
is cheap, coordination is expensive, and building is the new planning.&lt;/p&gt;
&lt;p&gt;The question is how fast you can adapt while still owning what you ship.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;em&gt;More on AI-assisted development: Learn
&lt;a href=&quot;https://magarcia.io/asking-ai-to-build-the-tool-instead-of-doing-the-task/&quot;&gt;techniques for building tools with AI&lt;/a&gt;,
see how AI helped design the
&lt;a href=&quot;https://magarcia.io/air-gapped-webrtc-breaking-the-qr-limit/&quot;&gt;QWBP serverless WebRTC protocol&lt;/a&gt;, or
explore
&lt;a href=&quot;https://magarcia.io/writing-powerful-claude-code-skills-with-npx-bun/&quot;&gt;writing Claude Code skills with Bun&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
</content:encoded>
        </item>
        <item>
            <title><![CDATA[Breaking the QR Limit: The Discovery of a Serverless WebRTC Protocol]]></title>
            <description><![CDATA[How I compressed WebRTC signaling from 2,500 bytes to 55 bytes using a custom binary protocol, enabling peer-to-peer connections through QR codes without any signaling server. The complete story of QWBP (QR-WebRTC Bootstrap Protocol).]]></description>
            <link>https://magarcia.io/air-gapped-webrtc-breaking-the-qr-limit/</link>
            <guid isPermaLink="false">https://magarcia.io/air-gapped-webrtc-breaking-the-qr-limit/</guid>
            <category><![CDATA[javascript]]></category>
            <category><![CDATA[typescript]]></category>
            <category><![CDATA[webrtc]]></category>
            <category><![CDATA[security]]></category>
            <category><![CDATA[open-source]]></category>
            <category><![CDATA[ai]]></category>
            <dc:creator><![CDATA[contact@magarcia.io]]></dc:creator>
            <pubDate>Mon, 19 Jan 2026 00:00:00 GMT</pubDate>
            <content:encoded>&lt;p&gt;&lt;strong&gt;QWBP (QR-WebRTC Bootstrap Protocol)&lt;/strong&gt; enables serverless peer-to-peer
connections by compressing WebRTC signaling into QR codes. By designing a custom
binary protocol that reduces SDP from 2,500 bytes to just 55 bytes, two devices
can establish encrypted WebRTC connections by scanning each other&apos;s QR
codes—with no signaling server required.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The reasonable man adapts himself to the world: the unreasonable one persists
in trying to adapt the world to himself. Therefore all progress depends on the
unreasonable man.&lt;/p&gt;
&lt;p&gt;— George Bernard Shaw&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I hardcoded passwords into production. I violated WebRTC best practices. I
designed a custom binary protocol. Then I threw it all away when I discovered
the real problem wasn&apos;t compression—it was physics.&lt;/p&gt;
&lt;p&gt;This is the story of a Thursday evening, a Friday morning, and an unreasonable
protocol that shouldn&apos;t exist.&lt;/p&gt;
&lt;h2&gt;The User Request I Couldn&apos;t Answer&lt;/h2&gt;
&lt;p&gt;January 2025. &lt;a href=&quot;https://palabreja.com&quot;&gt;&lt;strong&gt;Palabreja&lt;/strong&gt;&lt;/a&gt;, my daily Spanish word
game, had grown to over 30K monthly active players across Spain and Latin America.
Built as a static Progressive Web App with zero backend—no database, no user
accounts—everything lived in &lt;code&gt;localStorage&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Then came the Bluesky notification:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;quot;I&apos;m buying a new phone. How can I keep my progress?&amp;quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Most developers answer: &amp;quot;Log into your account.&amp;quot; I had no accounts, no server,
no answer.&lt;/p&gt;
&lt;p&gt;&amp;quot;Currently, there is no way.&amp;quot;&lt;/p&gt;
&lt;p&gt;That response gnawed at me. Players who upgraded phones would lose 2+ years of
game progress. Statistics carefully maintained over months would vanish.&lt;/p&gt;
&lt;p&gt;I refused to spin up a database to move a few kilobytes of JSON between two
devices sitting next to each other. I wanted &lt;em&gt;direct device-to-device transfer
with zero servers&lt;/em&gt;.&lt;/p&gt;
&lt;h2&gt;Thursday Evening: The &amp;quot;Serverless&amp;quot; Lie&lt;/h2&gt;
&lt;p&gt;After work, I opened my laptop. WebRTC seemed perfect — peer-to-peer
connections, browser-native APIs, no relay servers.&lt;/p&gt;
&lt;p&gt;Every tutorial showed the same pattern:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;const peer = new RTCPeerConnection();
const offer = await peer.createOffer();
await peer.setLocalDescription(offer);

// Send offer to other peer via... WebSocket server?
socket.send(JSON.stringify(offer));
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;There it was. &lt;strong&gt;Signaling requires a server.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Before two browsers connect peer-to-peer, they exchange Session Description
Protocol (SDP) messages—offers and answers containing network information and
encryption parameters. The WebRTC spec leaves signaling unspecified, assuming
you&apos;ll use WebSockets, HTTP POST, or another server-mediated channel.&lt;/p&gt;
&lt;p&gt;I had no server. I wanted no server.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;QR codes.&lt;/strong&gt; Display the offer as a QR code, scan it with the other phone,
display the answer as another QR code, scan that. No server. Air-gapped
communication using screens and cameras.&lt;/p&gt;
&lt;p&gt;I built a prototype. The QR code appeared.&lt;/p&gt;
&lt;p&gt;It was &lt;strong&gt;massive&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;A Version 30+ QR code—over 130 modules per side—filled my phone screen. Dense,
chaotic, unreadable.&lt;/p&gt;
&lt;p&gt;Scanning results:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Good lighting, steady hands:&lt;/strong&gt; 8 seconds, 60% success&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Dim room:&lt;/strong&gt; 15+ seconds, failed most attempts&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Scratched lens:&lt;/strong&gt; Never succeeded&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;My &amp;quot;instant sync&amp;quot; took longer than typing the data manually.&lt;/p&gt;
&lt;p&gt;I printed the SDP to understand what I was fighting:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;v=0
o=- 4682389562847392847 2 IN IP4 127.0.0.1
s=-
t=0 0
a=group:BUNDLE 0
a=extmap-allow-mixed
a=msid-semantic: WMS
m=application 9 UDP/DTLS/SCTP webrtc-datachannel
c=IN IP4 0.0.0.0
a=ice-ufrag:eP8j
a=ice-pwd:3K9m...
a=ice-options:trickle
a=fingerprint:sha-256 E7:3B:38:46:1A:5D:88:B0:...
a=setup:actpass
a=mid:0
a=sctp-port:5000
a=max-message-size:262144
a=candidate:1 1 udp 2122260223 192.168.1.100 54321 typ host
... (20 more candidate lines)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;2,487 bytes.&lt;/strong&gt; Session Description Protocol[^1] dates to 1998, designed for
VoIP, where endpoints negotiate video codecs, audio sampling rates, and bandwidth
constraints. I controlled both endpoints. 90% of this data was ceremony for a
negotiation that would never happen.&lt;/p&gt;
&lt;p&gt;[^1]:
RFC 8866 - SDP: Session Description Protocol,
https://datatracker.ietf.org/doc/html/rfc8866&lt;/p&gt;
&lt;p&gt;The question became: &amp;quot;Which SDP fields are actually &lt;em&gt;required&lt;/em&gt;?&amp;quot;&lt;/p&gt;
&lt;h2&gt;The Path I Didn&apos;t Take&lt;/h2&gt;
&lt;p&gt;Prior work exists: &lt;strong&gt;animated QR sequences&lt;/strong&gt; that flash frames until the scanner
captures all parts[^2][^3], and &lt;strong&gt;fountain codes&lt;/strong&gt; (TXQR[^4]) that tolerate
missed frames. These achieve ~9 KB/s under ideal conditions but require holding
steady for 10+ seconds—acceptable for crypto wallet signing, but too ceremonial
for casual use.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The fork in the road&lt;/strong&gt;: Animated QRs solve the transport problem—&amp;quot;how do I
move 2.5KB through a QR code?&amp;quot; I needed to solve the meaning problem—&amp;quot;do I need
2.5KB?&amp;quot;&lt;/p&gt;
&lt;p&gt;I looked at existing libraries like &lt;strong&gt;sdp-compact&lt;/strong&gt;, which strip whitespace and
apply standard compression. But they still hit the &amp;quot;Generic Compression
Limit&amp;quot;—the overhead of headers and Base64 encoding often outweighed the savings
for small payloads.&lt;/p&gt;
&lt;p&gt;[^2]:
Franklin Ta, &amp;quot;Serverless WebRTC using QR codes&amp;quot; (2014),
https://franklinta.com/2014/10/19/serverless-webrtc-using-qr-codes/&lt;/p&gt;
&lt;p&gt;[^3]: webrtc-via-qr GitHub repository, https://github.com/Qivex/webrtc-via-qr&lt;/p&gt;
&lt;p&gt;[^4]: TXQR: Transfer via QR with fountain codes, https://github.com/divan/txqr&lt;/p&gt;
&lt;h2&gt;The Hack That Worked&lt;/h2&gt;
&lt;p&gt;Analyzing the SDP structure revealed what was actually needed: ICE credentials,
DTLS fingerprint, setup value, and ICE candidates. Everything else—session
description, bundling info, SCTP parameters—could be hardcoded on both ends.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Quick glossary for the uninitiated:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;ICE&lt;/strong&gt; (Interactive Connectivity Establishment): The protocol that figures
out how two devices can reach each other across networks, firewalls, and NATs&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;ICE candidates&lt;/strong&gt;: Network addresses (IP + port) where a device can
potentially be reached&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;DTLS&lt;/strong&gt; (Datagram TLS): Encryption layer for WebRTC—like HTTPS but for
real-time data&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;DTLS fingerprint&lt;/strong&gt;: A hash of the device&apos;s security certificate, used to
verify you&apos;re talking to the right peer&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;First insight: Hardcode the ICE credentials.&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;a=ice-ufrag:eP8j
a=ice-pwd:3K9m...
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;These are the ICE &amp;quot;username fragment&amp;quot; (ufrag) and password—random strings that
peers exchange to authenticate connectivity checks. 50 bytes of high-entropy
data—impossible to compress. I asked: &amp;quot;Can I hardcode these? What breaks?&amp;quot;&lt;/p&gt;
&lt;p&gt;Digging into &lt;a href=&quot;https://datatracker.ietf.org/doc/html/rfc5245&quot;&gt;RFC 5245&lt;/a&gt; revealed
the answer. ICE credentials authenticate connectivity checks between peers, but
the &lt;em&gt;real&lt;/em&gt; security comes from the DTLS fingerprint[^5]—a SHA-256 hash of the
device&apos;s TLS certificate. An attacker with ICE credentials but the wrong
certificate cannot connect; the DTLS handshake fails.&lt;/p&gt;
&lt;p&gt;[^5]:
RFC 8827 - WebRTC Security Architecture,
https://datatracker.ietf.org/doc/html/rfc8827&lt;/p&gt;
&lt;p&gt;I hardcoded them:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;const ICE_UFRAG = &amp;quot;palabreja&amp;quot;;
const ICE_PWD = &amp;quot;xK9...........cB0&amp;quot;;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Saved: 50 bytes.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Second insight: Filter candidates.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Browsers emit 15-30 ICE candidates—every network interface: Wi-Fi, VPN, Docker,
link-local IPv6. Most fail or connect slowly. But my first test with a single
candidate failed—the VPN interface appeared first, hiding the Wi-Fi address that
could actually connect.&lt;/p&gt;
&lt;p&gt;I raised the limit to 3 &amp;quot;host&amp;quot; candidates (local network addresses) plus 1
&amp;quot;srflx&amp;quot; (server-reflexive) candidate. The srflx candidate is your public IP
address as seen from the internet, discovered by asking a STUN server &amp;quot;what&apos;s my
IP?&amp;quot; This handles the case where devices are on different networks.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Saved: 1,200+ bytes.&lt;/strong&gt;&lt;/p&gt;
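&lt;p&gt;The cap can be sketched as a small filter over the emitted candidate lines (illustrative, assuming standard &lt;code&gt;a=candidate&lt;/code&gt; syntax; not QWBP&apos;s actual code):&lt;/p&gt;

```typescript
// Sketch: keep at most three host candidates and one srflx candidate,
// in arrival order; drop everything else (relay, prflx, ...).
function filterCandidates(lines: string[]): string[] {
  const kept: string[] = [];
  let hosts = 0;
  let srflx = 0;
  for (const line of lines) {
    const m = line.match(/ typ (host|srflx)/);
    if (!m) continue; // unrecognized or unwanted candidate type
    if (m[1] === "host") {
      if (hosts === 3) continue; // host cap reached
      hosts += 1;
    } else {
      if (srflx === 1) continue; // only one public-address candidate
      srflx += 1;
    }
    kept.push(line);
  }
  return kept;
}
```

&lt;p&gt;Keeping arrival order matters: browsers list interfaces in priority order, so the first few host candidates are usually the ones that connect.&lt;/p&gt;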
&lt;p&gt;&lt;strong&gt;Third insight: Binary protocol.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I stared at the minified JSON I was transmitting. Brackets. Quotes. Key names.
The string &lt;code&gt;&amp;quot;type&amp;quot;&lt;/code&gt; appeared in every message—5 bytes to encode something that
could only ever be &amp;quot;offer&amp;quot; or &amp;quot;answer&amp;quot;. The fingerprint was a 95-character hex
string with colons, but underneath it was just 32 bytes of raw data.&lt;/p&gt;
&lt;p&gt;JSON is designed for &lt;em&gt;interoperability&lt;/em&gt;—human-readable, self-describing,
universally parseable. But I controlled both endpoints and wrote encoder and
decoder. Nothing needed to be human-readable or self-describing.&lt;/p&gt;
&lt;p&gt;I remembered studying low-level networking—how TCP headers pack flags, sequence
numbers, and ports into fixed positions. No field names. No delimiters. Just
bytes at known offsets. What if I designed a packet format instead of a JSON
object?&lt;/p&gt;
&lt;p&gt;Strip everything constant. Keep only what&apos;s dynamic:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;┌────────┬─────────────────────┬──────────────────────────────┐
│ Byte 0 │ Bytes 1-32          │ Bytes 33+                    │
├────────┼─────────────────────┼──────────────────────────────┤
│ Type   │ DTLS Fingerprint    │ ICE Candidates (packed)      │
│ 0=offer│ SHA-256 hash        │ &amp;quot;h|u|192.168.1.5|54321|...&amp;quot;  │
└────────┴─────────────────────┴──────────────────────────────┘
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;One byte for type instead of &lt;code&gt;&amp;quot;type&amp;quot;:&amp;quot;offer&amp;quot;&lt;/code&gt;. 32 raw bytes for the fingerprint
instead of 95 ASCII characters. No brackets, no quotes, no field names.&lt;/p&gt;
&lt;p&gt;But I wasn&apos;t done. The candidates were still strings: &lt;code&gt;&amp;quot;h|u|192.168.1.5|54321&amp;quot;&lt;/code&gt;.
That IP address alone is 13 characters—but an IPv4 address is just 4 bytes. Why
three ASCII characters for &lt;code&gt;192&lt;/code&gt; when &lt;code&gt;0xC0&lt;/code&gt; suffices?&lt;/p&gt;
&lt;p&gt;I pushed further. Each candidate became a fixed-layout binary structure:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;┌─────────┬────────────────┬────────┐
│ Flags   │ IP Address     │ Port   │
│ (1B)    │ (4B or 16B)    │ (2B)   │
└─────────┴────────────────┴────────┘

Flags byte (bitmask):
  Bits 0-1: Address family (00=IPv4, 01=IPv6, 10=reserved*)
  Bit 2:    Protocol (0=UDP, 1=TCP)
  Bit 3:    Candidate type (0=host, 1=srflx)
  Bits 4-5: TCP type[^6] (if TCP): 00=passive, 01=active, 10=so
  Bits 6-7: Reserved

*The reserved slot becomes important later—browser privacy features require it.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;[^6]:
RFC 6544 - TCP Candidates with Interactive Connectivity Establishment (ICE),
https://datatracker.ietf.org/doc/html/rfc6544&lt;/p&gt;
&lt;p&gt;The string &lt;code&gt;&amp;quot;h|u|192.168.1.5|54321&amp;quot;&lt;/code&gt; (21 characters) became 7 bytes. A 66%
reduction on candidate data alone—and candidates were the bulk of the payload.&lt;/p&gt;
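&lt;p&gt;A sketch of the 7-byte IPv4 record, following the flags table above (illustrative, not the library&apos;s exact encoder):&lt;/p&gt;

```typescript
// Sketch of the 7-byte IPv4 candidate record: flags, address, port.
function encodeCandidate(ip: string, port: number, srflx: boolean, tcp: boolean): Uint8Array {
  const out = new Uint8Array(7);
  let flags = 0; // bits 0-1: address family, 00 = IPv4
  if (tcp) flags |= 4; // bit 2: protocol (0=UDP, 1=TCP)
  if (srflx) flags |= 8; // bit 3: candidate type (0=host, 1=srflx)
  out[0] = flags;
  const parts = ip.split(".");
  for (let i = 0; i !== 4; i += 1) out[1 + i] = Number(parts[i]);
  out[5] = Math.floor(port / 256); // port, big-endian
  out[6] = port % 256;
  return out;
}

function decodeCandidate(buf: Uint8Array) {
  const srflx = Math.floor(buf[0] / 8) % 2 === 1; // read bit 3 back
  const ip = [buf[1], buf[2], buf[3], buf[4]].join(".");
  const port = buf[5] * 256 + buf[6];
  return { ip, port, srflx };
}
```

&lt;p&gt;Fixed offsets mean the decoder needs no delimiters at all: byte 0 is always flags, bytes 1-4 the address, bytes 5-6 the port.&lt;/p&gt;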
&lt;p&gt;The full packet structure:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;┌─────────┬─────────────────┬─────────────────────────────────┐
│ Field   │ Size            │ Description                     │
├─────────┼─────────────────┼─────────────────────────────────┤
│ Type    │ 1 byte          │ 0x00 = offer, 0x01 = answer     │
│ FP      │ 32 bytes        │ DTLS fingerprint (SHA-256)      │
│ Cand 1  │ 7 bytes (IPv4)  │ Flags + IP + Port               │
│         │ 19 bytes (IPv6) │                                 │
│ Cand 2  │ 7-19 bytes      │ (repeat until end of payload)   │
│ ...     │                 │                                 │
└─────────┴─────────────────┴─────────────────────────────────┘

Typical payload: 1 + 32 + (4 × 7) = 61 bytes (4 IPv4 candidates)
Maximum payload: 1 + 32 + (4 × 19) = 109 bytes (4 IPv6 candidates)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Fourth insight: DEFLATE compression.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I applied fflate (DEFLATE level 9) to the binary payload:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Before compression: 91 bytes
After compression:  44 bytes
After base64:       60 bytes
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Result: 2,487 bytes → 60 bytes. 97.6% reduction.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The QR codes scanned quickly—under a second in my tests. I had solved the
compression problem.&lt;/p&gt;
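&lt;p&gt;For illustration only: the article&apos;s pipeline uses fflate in the browser, but Node&apos;s built-in &lt;code&gt;zlib&lt;/code&gt; can stand in to show the same raw DEFLATE step:&lt;/p&gt;

```typescript
import { deflateRawSync, inflateRawSync } from "node:zlib";

// Raw DEFLATE at maximum compression, mirroring fflate's level-9 setting.
function compress(payload: Uint8Array): Uint8Array {
  return new Uint8Array(deflateRawSync(payload, { level: 9 }));
}

// Repetitive text shrinks dramatically; the exact sizes depend on the input.
const sample = new TextEncoder().encode("a=candidate ".repeat(20));
const packed = compress(sample);
console.log(sample.length, packed.length);
```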
&lt;p&gt;But something bothered me. Hardcoded passwords felt &lt;em&gt;wrong&lt;/em&gt;. I&apos;d made progress,
but this was still a hack, not a protocol.&lt;/p&gt;
&lt;h2&gt;Refining the Hack&lt;/h2&gt;
&lt;p&gt;The hardcoded credentials nagged at me. It&apos;s a JavaScript website—the source
code is readable. Anyone could open DevTools, find the ICE password, and...
well, what exactly? The &lt;em&gt;real&lt;/em&gt; encryption happens in the DTLS handshake,
authenticated by the fingerprint. ICE credentials are just for routing
verification. Not critical.&lt;/p&gt;
&lt;p&gt;Still, it bothered me. Having the source code shouldn&apos;t give you the keys. Then
I realized: there&apos;s already something unique per session. The DTLS fingerprint—a
SHA-256 hash of each device&apos;s certificate—is already in the QR code. What if I
derived the ICE credentials from that?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Discovery: Derive credentials, don&apos;t hardcode them.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The solution: HKDF-SHA256[^7], a standard key derivation function. The key
insight: each peer derives &lt;em&gt;its own&lt;/em&gt; credentials from &lt;em&gt;its own&lt;/em&gt; fingerprint—not
shared credentials from a common secret.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;How it works&lt;/strong&gt;:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Peer A derives &lt;code&gt;ufrag_A&lt;/code&gt; from &lt;code&gt;Fingerprint_A&lt;/code&gt; using HKDF&lt;/li&gt;
&lt;li&gt;Peer B derives &lt;code&gt;ufrag_B&lt;/code&gt; from &lt;code&gt;Fingerprint_B&lt;/code&gt; using HKDF&lt;/li&gt;
&lt;li&gt;QR codes exchange both fingerprints&lt;/li&gt;
&lt;li&gt;Each peer can locally compute the other&apos;s expected credentials for validation&lt;/li&gt;
&lt;li&gt;ICE connectivity checks use standard username format:
&lt;code&gt;ufrag_remote:ufrag_local&lt;/code&gt;[^8]&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;HKDF parameters&lt;/strong&gt; (for implementers):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;// Salt is empty because the entropy source (DTLS certificate) is already
// high-entropy and ephemeral—no additional randomness needed
const salt = new Uint8Array(0);
const ufragInfo = new TextEncoder().encode(&amp;quot;QWBP-ICE-UFRAG-v1&amp;quot;);
const pwdInfo = new TextEncoder().encode(&amp;quot;QWBP-ICE-PWD-v1&amp;quot;);

// Derive 4 bytes for ufrag, encode as base64url (yields 6 chars, min is 4)
const ufragBytes = await hkdf(fingerprint, salt, ufragInfo, 4);
const ufrag = base64url(ufragBytes);

// Derive 18 bytes for pwd, encode as base64url (yields 24 chars, min is 22)
const pwdBytes = await hkdf(fingerprint, salt, pwdInfo, 18);
const pwd = base64url(pwdBytes);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;RFC 8839 requires ufrag ≥4 chars, pwd ≥22 chars, using &lt;code&gt;[A-Za-z0-9+/]&lt;/code&gt;.
Base64url satisfies this.&lt;/p&gt;
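&lt;p&gt;The &lt;code&gt;base64url&lt;/code&gt; helper above is assumed; a minimal sketch (the
padding strip is what keeps the outputs at exactly 6 and 24 characters):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;function base64url(bytes: Uint8Array): string {
  // Build a binary string, then use btoa and swap to the URL-safe alphabet
  let bin = &amp;quot;&amp;quot;;
  for (const b of bytes) bin += String.fromCharCode(b);
  return btoa(bin)
    .replace(/\+/g, &amp;quot;-&amp;quot;)
    .replace(/\//g, &amp;quot;_&amp;quot;)
    .replace(/=+$/, &amp;quot;&amp;quot;);
}
&lt;/code&gt;&lt;/pre&gt;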
&lt;p&gt;This satisfies RFC 8839&apos;s entropy requirement[^9]—the randomness comes from the
ephemeral DTLS certificate, not from HKDF itself. This avoids shipping secrets
in code and guarantees per-session uniqueness as long as each connection attempt
generates a fresh certificate.&lt;/p&gt;
&lt;p&gt;Now the source code alone yields nothing. You need visual access to the specific
QR code to know that session&apos;s credentials. The security boundary shifted from
&amp;quot;secret in code&amp;quot; to &amp;quot;physical proximity required.&amp;quot;&lt;/p&gt;
&lt;p&gt;[^7]:
RFC 5869 - HMAC-based Extract-and-Expand Key Derivation Function (HKDF),
https://datatracker.ietf.org/doc/html/rfc5869&lt;/p&gt;
&lt;p&gt;[^8]:
RFC 8445, Section 7.2.2 - Forming Credentials,
https://datatracker.ietf.org/doc/html/rfc8445#section-7.2.2&lt;/p&gt;
&lt;p&gt;[^9]:
RFC 8839, Section 5.4 - ICE Password,
https://datatracker.ietf.org/doc/html/rfc8839#section-5.4&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Discovery: The compression paradox.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Testing with real Chrome and Firefox SDP data revealed a surprising result. The
binary payload—already stripped of redundancy—was high-entropy. Running DEFLATE
on it &lt;em&gt;increased&lt;/em&gt; the size:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Binary payload:     61 bytes
After compression:  83 bytes
After base64:       112 bytes
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The DEFLATE header and block overhead exceeded any savings on the already-dense
data. For optimized binary data, &lt;strong&gt;skip compression entirely&lt;/strong&gt;.&lt;/p&gt;
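&lt;p&gt;The effect reproduces with any DEFLATE implementation; here is a quick check
using Node&apos;s zlib (a demonstration, not the code from the protocol itself):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;import { deflateRawSync } from &amp;quot;node:zlib&amp;quot;;

// 256 distinct byte values: uniform, incompressible input
const payload = Buffer.from(Array.from({ length: 256 }, (_, i) =&amp;gt; i));
const deflated = deflateRawSync(payload);

// DEFLATE falls back to stored blocks for incompressible data and still
// adds block header bytes, so the output is larger than the input
console.log(payload.length, deflated.length);
&lt;/code&gt;&lt;/pre&gt;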
&lt;p&gt;&lt;strong&gt;Discovery: Base64 is a tax.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;QR codes support raw binary (Byte mode, ISO 8859-1). Most JavaScript QR
libraries accept &lt;code&gt;Uint8Array&lt;/code&gt; directly. Base64 adds 37% overhead for no benefit.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;With base64:    84 bytes → QR v5
Without base64: 61 bytes → QR v4
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I had been paying a 37% size penalty because I assumed QR codes needed text
encoding. They don&apos;t.&lt;/p&gt;
&lt;p&gt;The hack was becoming a protocol. But I still hadn&apos;t addressed the fundamental
problem.&lt;/p&gt;
&lt;h2&gt;Friday Morning: The &amp;quot;Return Trip&amp;quot; Problem&lt;/h2&gt;
&lt;p&gt;I had optimized the &lt;em&gt;offer&lt;/em&gt;. But WebRTC requires bidirectional exchange—the
receiver must send an &lt;em&gt;answer&lt;/em&gt; back.&lt;/p&gt;
&lt;p&gt;In a serverless PWA environment:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Device A cannot listen for incoming connections (browsers are clients, not
servers)&lt;/li&gt;
&lt;li&gt;Unsolicited DTLS packets from Device B are dropped&lt;/li&gt;
&lt;li&gt;ICE authentication prevents connectivity without both peers knowing each
other&apos;s credentials&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;You cannot establish a WebRTC connection with a single unidirectional scan.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I explored alternatives:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Bluetooth:&lt;/strong&gt; Web Bluetooth API cannot act as a peripheral (server role).
PWAs can only be central devices, meaning both phones would try to &lt;em&gt;connect&lt;/em&gt;,
neither would &lt;em&gt;listen&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;NFC:&lt;/strong&gt; Web NFC cannot emulate tags. Both phones would try to &lt;em&gt;read&lt;/em&gt;, neither
would &lt;em&gt;write&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Audio data transfer:&lt;/strong&gt; Requires microphone permission. Unreliable in noisy
environments. Users would rightfully be suspicious.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Wi-Fi Direct:&lt;/strong&gt; No Web API exists.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Every alternative either demanded a server or required permissions that would
spook users.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The only universal, permission-friendly I/O channel available to PWAs is
bidirectional QR code scanning.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I called it the &lt;strong&gt;&amp;quot;QR Tango&amp;quot;&lt;/strong&gt;:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Device A displays QR code&lt;/li&gt;
&lt;li&gt;Device B scans it, then displays &lt;em&gt;its&lt;/em&gt; QR code&lt;/li&gt;
&lt;li&gt;Device A scans Device B&apos;s QR code&lt;/li&gt;
&lt;li&gt;Connection establishes&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;But this introduced a new problem.&lt;/p&gt;
&lt;h2&gt;The Glare Problem&lt;/h2&gt;
&lt;p&gt;If both users press &amp;quot;Connect&amp;quot; simultaneously, both phones generate &lt;em&gt;offers&lt;/em&gt;.
WebRTC&apos;s state machine rejects a remote offer while in the
&amp;quot;have-local-offer&amp;quot; state: the browser throws an &lt;code&gt;InvalidStateError&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;The obvious solution: designate one device as &amp;quot;sender&amp;quot; and one as &amp;quot;receiver.&amp;quot;
Consider the UX: Most Palabreja players are 50+ years old. They know how to scan
a QR code—that&apos;s intuitive. But explaining &amp;quot;first you press Send, then they scan
your code, then they press Receive, then you scan their code, and it has to be
in that order&amp;quot;? That&apos;s not intuitive. That&apos;s a support nightmare. It felt
broken.&lt;/p&gt;
&lt;p&gt;I wanted one button: &amp;quot;Connect.&amp;quot; Both users press it. Both scan. It just works.&lt;/p&gt;
&lt;p&gt;But that reintroduces the technical problem. I needed role assignment. And if
roles are encoded in the QR code, you get race conditions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;User A displays &amp;quot;Offer&amp;quot; QR&lt;/li&gt;
&lt;li&gt;User B displays &amp;quot;Offer&amp;quot; QR&lt;/li&gt;
&lt;li&gt;Neither can proceed&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Or worse—&amp;quot;stale QRs&amp;quot;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;User A displays &amp;quot;Offer&amp;quot; QR&lt;/li&gt;
&lt;li&gt;User B scans it, role updates to &amp;quot;Answerer&amp;quot;&lt;/li&gt;
&lt;li&gt;Screen refreshes with &amp;quot;Answer&amp;quot; QR&lt;/li&gt;
&lt;li&gt;User A scans the &lt;em&gt;old&lt;/em&gt; cached QR before it updates&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I kept asking: how do I remove the offer/answer byte from the protocol header?
Every approach led to the same problem—the protocol needs to know who acts as
offerer and who acts as answerer. It seemed fundamental to WebRTC&apos;s state
machine.&lt;/p&gt;
&lt;p&gt;Then it clicked. I&apos;d already solved a similar problem with ICE
credentials—deriving them from data already in the payload instead of
transmitting them separately. What if I did the same for role assignment?&lt;/p&gt;
&lt;p&gt;The fingerprints. They&apos;re unique per device. They&apos;re already in the QR code. And
crucially: two different fingerprints are never equal. One is always higher than
the other when compared byte-by-byte. If they &lt;em&gt;are&lt;/em&gt; equal, you&apos;re scanning your
own QR code—an error the protocol should catch anyway.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The breakthrough: Symmetric identity exchange.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Instead of encoding &amp;quot;Offer&amp;quot; or &amp;quot;Answer,&amp;quot; both QR codes contain only &lt;strong&gt;identity&lt;/strong&gt;
(fingerprint) and &lt;strong&gt;location&lt;/strong&gt; (IP addresses)—like business cards. After both
scans complete, each device has both fingerprints. Roles are assigned
&lt;em&gt;deterministically&lt;/em&gt; by comparison:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;// Lexicographic byte-by-byte comparison of the two fingerprints
function compareBytes(a: Uint8Array, b: Uint8Array): number {
  for (let i = 0; i &amp;lt; Math.min(a.length, b.length); i++) {
    if (a[i] !== b[i]) return a[i] - b[i];
  }
  return a.length - b.length;
}

const cmp = compareBytes(localFingerprint, remoteFingerprint);
if (cmp &amp;gt; 0) {
  role = &amp;quot;OFFERER&amp;quot;; // Higher fingerprint → Offerer
} else if (cmp &amp;lt; 0) {
  role = &amp;quot;ANSWERER&amp;quot;; // Lower fingerprint → Answerer
} else {
  // Same fingerprint → Loopback error
  throw new Error(&amp;quot;Cannot connect to self&amp;quot;);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Simple byte comparison. Deterministic. No race conditions. No stale QRs.&lt;/p&gt;
&lt;p&gt;The offerer synthesizes a &amp;quot;fake&amp;quot; SDP answer locally using the answerer&apos;s
fingerprint and candidates. This satisfies the browser&apos;s state machine without
additional data transmission.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Result: Role-independent QR codes.&lt;/strong&gt; Press &amp;quot;Connect,&amp;quot; display your card, scan
theirs. Order doesn&apos;t matter.&lt;/p&gt;
&lt;h2&gt;The Browser State Paradox&lt;/h2&gt;
&lt;p&gt;Solving the glare problem introduced a subtle bug. To generate the QR code, both
devices must first gather candidates, which puts both browsers into the &amp;quot;Have
Local Offer&amp;quot; state.&lt;/p&gt;
&lt;p&gt;If the protocol decides you are the &lt;strong&gt;Answerer&lt;/strong&gt;, you have a problem: you can&apos;t
accept an Offer if you already &lt;em&gt;have&lt;/em&gt; an Offer.&lt;/p&gt;
&lt;p&gt;The naive solution is to destroy the WebRTC connection and start fresh. &lt;strong&gt;But
you can&apos;t.&lt;/strong&gt; The QR code currently displayed on your screen encodes specific
network ports (e.g., port 54321). If you destroy the connection object, the OS
closes those ports. The map you just gave your partner becomes a dead end.&lt;/p&gt;
&lt;p&gt;The solution is &lt;strong&gt;Signaling Rollback&lt;/strong&gt;. We use
&lt;code&gt;setLocalDescription({type: &apos;rollback&apos;})&lt;/code&gt; to reset the signaling state to
&lt;code&gt;stable&lt;/code&gt; while keeping the underlying ICE transport—and those precious
ports—alive. It allows the software to change its mind about who is calling whom
without the physics of the network layer noticing.&lt;/p&gt;
&lt;h2&gt;Reconstructing the SDP&lt;/h2&gt;
&lt;p&gt;Both peers now have everything needed to synthesize a complete SDP locally:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;From the QR code&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;DTLS fingerprint (32 bytes)&lt;/li&gt;
&lt;li&gt;ICE candidates (3-4 packed binary structures)&lt;/li&gt;
&lt;li&gt;Remote device identity&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Generated locally&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;ICE credentials (derived from fingerprints via HKDF)&lt;/li&gt;
&lt;li&gt;Role assignment (fingerprint comparison)&lt;/li&gt;
&lt;li&gt;Session metadata (timestamps, IDs)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The offerer—who already has a valid local offer pending from the gathering
phase—uses the scanned data to synthesize a &lt;strong&gt;fake Remote Answer&lt;/strong&gt;. This tricks
the browser into thinking a standard negotiation took place without actually
receiving an SDP answer packet.&lt;/p&gt;
&lt;p&gt;The answerer does the inverse: it performs a &lt;strong&gt;signaling rollback&lt;/strong&gt; (telling the
browser &amp;quot;forget that offer I just made you generate, but &lt;em&gt;keep the network ports
open&lt;/em&gt;&amp;quot;), synthesizes a &lt;strong&gt;fake Remote Offer&lt;/strong&gt; from the QR data, and then
generates a real local Answer to complete the connection.&lt;/p&gt;
&lt;p&gt;The browser sees a normal WebRTC negotiation—it&apos;s unaware the SDP came from a QR
code rather than a signaling server.&lt;/p&gt;
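&lt;p&gt;A sketch of what that local synthesis can look like. Everything here—the
helper name, the field set, the minimal m-line—is illustrative, not the exact
QWBP implementation:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;// Build a minimal DataChannel-only SDP from data recovered via the QR scan
function synthesizeSdp(opts: {
  ufrag: string;
  pwd: string;
  fingerprintHex: string; // 64 hex chars (SHA-256 of the DTLS cert)
  setup: &amp;quot;actpass&amp;quot; | &amp;quot;active&amp;quot; | &amp;quot;passive&amp;quot;;
}): string {
  // SDP wants the fingerprint as colon-separated uppercase hex pairs
  const fp = opts.fingerprintHex.toUpperCase().match(/../g)!.join(&amp;quot;:&amp;quot;);
  return [
    &amp;quot;v=0&amp;quot;,
    &amp;quot;o=- 0 0 IN IP4 0.0.0.0&amp;quot;,
    &amp;quot;s=-&amp;quot;,
    &amp;quot;t=0 0&amp;quot;,
    &amp;quot;a=group:BUNDLE 0&amp;quot;,
    &amp;quot;m=application 9 UDP/DTLS/SCTP webrtc-datachannel&amp;quot;,
    &amp;quot;c=IN IP4 0.0.0.0&amp;quot;,
    `a=ice-ufrag:${opts.ufrag}`,
    `a=ice-pwd:${opts.pwd}`,
    `a=fingerprint:sha-256 ${fp}`,
    `a=setup:${opts.setup}`,
    &amp;quot;a=mid:0&amp;quot;,
    &amp;quot;a=sctp-port:5000&amp;quot;,
  ].join(&amp;quot;\r\n&amp;quot;) + &amp;quot;\r\n&amp;quot;;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The offerer would pass a string like this to &lt;code&gt;setRemoteDescription&lt;/code&gt; as
a synthetic answer (&lt;code&gt;setup:active&lt;/code&gt;); the answerer as a synthetic offer
(&lt;code&gt;setup:actpass&lt;/code&gt;) before generating its real answer.&lt;/p&gt;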
&lt;h2&gt;The mDNS Complication&lt;/h2&gt;
&lt;p&gt;While reviewing the protocol, one last obstacle emerged. Modern browsers hide
local IP addresses behind mDNS hostnames for privacy—instead of &lt;code&gt;192.168.1.5&lt;/code&gt;,
the browser reports something like &lt;code&gt;b12498a7-c3d2-41e0-8f4b-7a6c5d4e3f2a.local&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;The problem: QWBP&apos;s binary format expects raw IPs (4 bytes for IPv4, 16 for
IPv6). A 42-character mDNS hostname doesn&apos;t fit.&lt;/p&gt;
&lt;p&gt;The solution is surprisingly elegant—and standards-compliant. WebRTC browser
implementations (following the IETF mDNS draft[^10]) mandate that mDNS hostnames
consist of &amp;quot;a version 4 UUID as defined in RFC 4122, followed by &apos;.local&apos;&amp;quot;.&lt;/p&gt;
&lt;p&gt;A UUID is 128 bits—exactly the size of an IPv6 address. The protocol doesn&apos;t
need to change the binary format; it just needs to expand the IP version flag
from 1 bit to 2 bits, encoding three states:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;00&lt;/code&gt; = IPv4 (4 bytes)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;01&lt;/code&gt; = IPv6 (16 bytes)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;10&lt;/code&gt; = mDNS UUID (16 bytes, packed as raw bytes)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This isn&apos;t a workaround—it&apos;s compliance optimization. Modern browsers (Chrome,
Safari) use this exact format for privacy[^11].&lt;/p&gt;
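&lt;p&gt;A sketch of the packing, with hypothetical helper names (QWBP&apos;s actual
functions may differ):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;// Pack the UUID portion of an mDNS hostname into the 16-byte slot (flag 10)
function packMdnsHostname(hostname: string): Uint8Array {
  const hex = hostname.replace(/\.local$/, &amp;quot;&amp;quot;).replace(/-/g, &amp;quot;&amp;quot;);
  if (!/^[0-9a-f]{32}$/i.test(hex)) throw new Error(&amp;quot;Not a UUID mDNS hostname&amp;quot;);
  const out = new Uint8Array(16);
  for (let i = 0; i &amp;lt; 16; i++) out[i] = parseInt(hex.slice(i * 2, i * 2 + 2), 16);
  return out;
}

// Reverse: 16 bytes back to &amp;quot;xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.local&amp;quot;
function unpackMdnsHostname(bytes: Uint8Array): string {
  const h = Array.from(bytes, (b) =&amp;gt; b.toString(16).padStart(2, &amp;quot;0&amp;quot;)).join(&amp;quot;&amp;quot;);
  return `${h.slice(0, 8)}-${h.slice(8, 12)}-${h.slice(12, 16)}-${h.slice(16, 20)}-${h.slice(20)}.local`;
}
&lt;/code&gt;&lt;/pre&gt;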
&lt;p&gt;[^10]:
draft-ietf-mmusic-mdns-ice-candidates-03, Section 3.1.1,
https://datatracker.ietf.org/doc/html/draft-ietf-mmusic-mdns-ice-candidates-03#section-3.1.1&lt;/p&gt;
&lt;p&gt;[^11]:
RFC 4122 - A Universally Unique IDentifier (UUID) URN Namespace,
https://datatracker.ietf.org/doc/html/rfc4122&lt;/p&gt;
&lt;p&gt;However, mDNS resolution between devices that haven&apos;t exchanged packets can be
slow or fail entirely. For the initial bootstrap, raw IPs are more reliable. On
Android and Chrome, requesting camera permission (needed anyway for QR scanning)
often causes the browser to reveal the raw local IP alongside the mDNS name.
Safari on iOS is stricter—it &lt;em&gt;only&lt;/em&gt; provides mDNS hostnames, making the UUID
packing essential rather than optional.&lt;/p&gt;
&lt;p&gt;The protocol was functionally complete. But was it &lt;em&gt;secure&lt;/em&gt;?&lt;/p&gt;
&lt;h2&gt;Threat Model: The Optical Channel&lt;/h2&gt;
&lt;p&gt;QWBP&apos;s security relies on the &lt;strong&gt;optical channel&lt;/strong&gt;—the screen displaying the QR
code.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What it protects against&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Remote attackers&lt;/strong&gt;: Cannot participate without visual access to both devices&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Source code inspection&lt;/strong&gt;: Knowing the implementation doesn&apos;t reveal session
keys&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Replay attacks&lt;/strong&gt;: Ephemeral keys (DTLS certificates generated per-session)
expire after connection&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;MITM attacks&lt;/strong&gt;: DTLS fingerprint verification[^12] prevents impersonation&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;What it assumes&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Physical proximity is the authentication factor&lt;/strong&gt;. If an attacker can
photograph both QR codes, they can potentially intercept the session (though
they&apos;d need to be on the same network segment and win the race to establish
connection first).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Short-lived sessions&lt;/strong&gt;: Keys are valid only for the current connection
attempt (~30 seconds).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Visual confirmation&lt;/strong&gt;: Users can see who they&apos;re connecting to (same room).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Optional: Short Authentication String (SAS)&lt;/strong&gt;: After connection, display a
short code (e.g., 4 words or 6 digits) derived from both fingerprints. Users
verbally confirm the code matches on both screens—this catches active MITM
attacks where an attacker substitutes their own QR. ZRTP[^13] pioneered this
pattern for voice calls; it applies equally to QWBP.&lt;/p&gt;
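&lt;p&gt;As a sketch of the formatting step—assuming a digest computed over both
fingerprints in sorted order, so both peers hash identical bytes—the SAS can be
the first bytes of that digest reduced to six digits:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;// Turn a shared digest (e.g. SHA-256 over both fingerprints, sorted) into
// a 6-digit code both screens can display for verbal comparison
function sasFromDigest(digest: Uint8Array): string {
  const n = (digest[0] &amp;lt;&amp;lt; 16) | (digest[1] &amp;lt;&amp;lt; 8) | digest[2];
  return String(n % 1_000_000).padStart(6, &amp;quot;0&amp;quot;);
}
&lt;/code&gt;&lt;/pre&gt;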
&lt;p&gt;[^12]:
RFC 8122, Section 5 - Fingerprint Attribute,
https://datatracker.ietf.org/doc/html/rfc8122#section-5&lt;/p&gt;
&lt;p&gt;[^13]:
RFC 6189 - ZRTP: Media Path Key Agreement for Unicast Secure RTP, Section
4.3 (SAS), https://datatracker.ietf.org/doc/html/rfc6189#section-4.3&lt;/p&gt;
&lt;h2&gt;The Bigger Picture: A Protocol, Not a Hack&lt;/h2&gt;
&lt;p&gt;Then it hit me. I&apos;d been thinking too small.&lt;/p&gt;
&lt;p&gt;A full WebRTC video call requires negotiating codecs, resolutions, bandwidth
constraints. A typical Chrome video SDP with audio, video (VP8, VP9, H.264, AV1,
H.265), and DataChannel weighs in at 6,255 bytes—sometimes more with all the
codec options. No QR code can hold that. Version 40, the largest possible, maxes
out at 2,953 bytes. A video SDP exceeds the &lt;em&gt;maximum possible QR capacity&lt;/em&gt; by
over 3KB.&lt;/p&gt;
&lt;p&gt;But the DataChannel SDP I&apos;d been compressing? That&apos;s just the &lt;em&gt;bootstrap&lt;/em&gt;. It
establishes a minimal encrypted pipe between two devices. Once that pipe exists,
you can send anything through it—including a 6KB video SDP.&lt;/p&gt;
&lt;p&gt;I wasn&apos;t building a game sync feature. I was building a &lt;strong&gt;signaling protocol&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Two-stage architecture:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;┌─────────────────────────────────────────────────────────────────┐
│  Layer 0: QR Bootstrap                                          │
│  ───────────────────────                                        │
│  • 55-100 bytes binary payload                                  │
│  • Fits in QR Version 4-5 (33-37 modules)                       │
│  • Establishes encrypted DataChannel                            │
│  • Scans in under 1 second (in my testing)                      │
└─────────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────────┐
│  Layer 1: Application Protocol                                  │
│  ─────────────────────────────                                  │
│  • No size constraints                                          │
│  • Exchange full video/audio SDPs (6KB+)                        │
│  • Stream files of any size                                     │
│  • Run any application protocol                                 │
└─────────────────────────────────────────────────────────────────┘
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This two-stage architecture—small bootstrap leading to full capability—follows
the same pattern as Wi-Fi Easy Connect (DPP)[^14], which uses a QR code to
bootstrap secure IoT provisioning.&lt;/p&gt;
&lt;p&gt;[^14]:
Wi-Fi Alliance, &amp;quot;Wi-Fi Easy Connect Specification v3.0&amp;quot;,
https://www.wi-fi.org/discover-wi-fi/wi-fi-easy-connect&lt;/p&gt;
&lt;p&gt;The implications went beyond Palabreja:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Video calls without servers:&lt;/strong&gt; Scan a QR code, establish the bootstrap
channel, negotiate full video through it. &lt;em&gt;(The irony of setting up a video
call face-to-face is not lost on me.)&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;File sharing:&lt;/strong&gt; The DataChannel can stream files of any size. A 55-byte QR
becomes a serverless AirDrop.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Device pairing:&lt;/strong&gt; IoT devices, smart home setup, any scenario where two
devices need to establish trust and a secure channel.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Multiplayer games:&lt;/strong&gt; Bootstrap a mesh network between players in the same
room. No game server needed for local multiplayer.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;A 55-100 byte bootstrap (a 99.12% reduction from the 6,255-byte video SDP)
unlocks full video negotiation, which unlocks unlimited bandwidth. A 4K video
call, initiated by scanning a QR code in dim lighting.&lt;/p&gt;
&lt;p&gt;This wasn&apos;t a hack anymore. It was a protocol worth naming.&lt;/p&gt;
&lt;p&gt;I called it the &lt;strong&gt;QR-WebRTC Bootstrap Protocol (QWBP)&lt;/strong&gt; — pronounced
&amp;quot;cue-web-pee&amp;quot; (&lt;code&gt;/kjuː wɛb piː/&lt;/code&gt;). Claude suggested the name; I liked it.&lt;/p&gt;
&lt;h2&gt;Why Not Animated QRs?&lt;/h2&gt;
&lt;p&gt;A fair question: if fountain codes can reliably transfer 9KB through animated
QRs, why shrink the protocol?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Three reasons:&lt;/strong&gt;&lt;/p&gt;
&lt;h3&gt;1. Latency Kills Manual Signaling&lt;/h3&gt;
&lt;p&gt;Manual signaling fights browser ICE timers. Firefox&apos;s
&lt;code&gt;media.peerconnection.ice.trickle_grace_period&lt;/code&gt; (default: 5000ms) can mark
gathering failed if it doesn&apos;t receive expected candidates in time. QWBP
sidesteps this by completing ICE gathering before displaying the QR—but users
still need to scan within a reasonable window.&lt;/p&gt;
&lt;p&gt;TXQR can transfer 9KB in ~1 second under ideal conditions, but real-world
performance degrades:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Poor lighting: 15+ seconds&lt;/li&gt;
&lt;li&gt;User fumbling with camera permissions: 20+ seconds&lt;/li&gt;
&lt;li&gt;Missed frames requiring rescan: restart from zero&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;By reducing payload to 55 bytes (Version 4 QR), scan time drops to
&lt;strong&gt;sub-500ms&lt;/strong&gt;—safely inside browser timeout windows.&lt;/p&gt;
&lt;h3&gt;2. UX Ceremony Tax&lt;/h3&gt;
&lt;p&gt;Animated QRs require:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Holding phone perfectly still&lt;/li&gt;
&lt;li&gt;Waiting for sequence completion&lt;/li&gt;
&lt;li&gt;Two-handed operation or phone stand&lt;/li&gt;
&lt;li&gt;Understanding what &amp;quot;3 of 12 frames captured&amp;quot; means&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Static QRs require:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Point camera&lt;/li&gt;
&lt;li&gt;Done&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For motivated users (crypto transactions), ceremony is acceptable. For casual
users (game sync), it&apos;s a support nightmare.&lt;/p&gt;
&lt;h3&gt;3. Semantic Compression Beats Transport Compression&lt;/h3&gt;
&lt;p&gt;Animated QRs compress &lt;strong&gt;at the transport layer&lt;/strong&gt;—fountain codes, LZMA, base32
encoding.&lt;/p&gt;
&lt;p&gt;QWBP compresses &lt;strong&gt;at the semantic layer&lt;/strong&gt;—understanding what ICE candidates
&lt;em&gt;mean&lt;/em&gt; achieves 97.79% reduction.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Approach&lt;/th&gt;
&lt;th&gt;Technique&lt;/th&gt;
&lt;th&gt;Data Size&lt;/th&gt;
&lt;th&gt;Scan Time&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Franklin Ta (2014)&lt;/td&gt;
&lt;td&gt;LZMA + animated&lt;/td&gt;
&lt;td&gt;~1000 bytes → 10 QR codes&lt;/td&gt;
&lt;td&gt;10-15 sec&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;TXQR&lt;/td&gt;
&lt;td&gt;Fountain codes&lt;/td&gt;
&lt;td&gt;9KB → 30 QR codes&lt;/td&gt;
&lt;td&gt;1-10 sec&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;BBQr&lt;/td&gt;
&lt;td&gt;Chunking + base32&lt;/td&gt;
&lt;td&gt;3KB → 12 QR codes&lt;/td&gt;
&lt;td&gt;5-12 sec&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;QWBP&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Binary protocol&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;55 bytes → 1 QR code&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;&amp;lt;0.5 sec&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;When you control both endpoints, &lt;strong&gt;domain knowledge is a compression
algorithm&lt;/strong&gt;.&lt;/p&gt;
&lt;h2&gt;The Final Protocol&lt;/h2&gt;
&lt;p&gt;By Friday afternoon, I had completed the QR-WebRTC Bootstrap Protocol (QWBP)
v1.0.0.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;What QWBP is (and isn&apos;t):&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;DataChannel-only bootstrap (DataChannel is WebRTC&apos;s raw data pipe, separate
from audio/video)—not a general SDP replacement&lt;/li&gt;
&lt;li&gt;Optimized for two devices in physical proximity with controlled
scanner/encoder&lt;/li&gt;
&lt;li&gt;&amp;quot;Serverless&amp;quot; on LAN; requires STUN/TURN servers for cross-network scenarios
(explained later)&lt;/li&gt;
&lt;li&gt;Not designed for video/audio negotiation, mesh networks, or untrusted
environments&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;p&gt;The packet structure evolved from my Thursday prototype. I added a &lt;strong&gt;Magic
Byte&lt;/strong&gt; (&lt;code&gt;0x51&lt;/code&gt; = &apos;Q&apos;) for protocol identification—so scanning a restaurant menu
QR fails fast instead of crashing—and a &lt;strong&gt;Version&lt;/strong&gt; field for future
compatibility:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;QWBP v1 Packet Structure:

┌───────────┬─────────────┬──────────────────────┬────────────────────┐
│ Magic (1B)│ Version (1B)│ Fingerprint (32B)    │ Candidates (Var)   │
│ 0x51 &apos;Q&apos;  │ Version:3b  │ SHA-256 DTLS         │ Binary-packed IPs  │
│           │ Reserved:5b │ (32 raw bytes)       │ (7B IPv4, 19B IPv6)│
└───────────┴─────────────┴──────────────────────┴────────────────────┘

Typical size: 55-100 bytes → QR Version 4-5 (33-37 modules)
&lt;/code&gt;&lt;/pre&gt;
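&lt;p&gt;A sketch of the fail-fast check (names are hypothetical, and the bit layout
follows my reading of the diagram: version in the top 3 bits):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;const QWBP_MAGIC = 0x51; // &apos;Q&apos;

// Validate and split a scanned payload; throws fast on non-QWBP QR codes
function parseHeader(packet: Uint8Array): {
  version: number;
  fingerprint: Uint8Array;
  candidates: Uint8Array;
} {
  if (packet.length &amp;lt; 34) throw new Error(&amp;quot;Packet too short&amp;quot;);
  if (packet[0] !== QWBP_MAGIC) throw new Error(&amp;quot;Not a QWBP payload&amp;quot;);
  return {
    version: packet[1] &amp;gt;&amp;gt; 5, // top 3 bits; low 5 are reserved
    fingerprint: packet.slice(2, 34),
    candidates: packet.slice(34),
  };
}
&lt;/code&gt;&lt;/pre&gt;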
&lt;p&gt;&lt;strong&gt;Connection flow:&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Both peers generate DTLS certificate and gather ICE candidates&lt;/li&gt;
&lt;li&gt;Both encode identity + location → display QR code&lt;/li&gt;
&lt;li&gt;Peer A scans Peer B&apos;s QR (order irrelevant)&lt;/li&gt;
&lt;li&gt;Peer B scans Peer A&apos;s QR&lt;/li&gt;
&lt;li&gt;Both compare fingerprints → determine roles&lt;/li&gt;
&lt;li&gt;Both synthesize appropriate SDP locally&lt;/li&gt;
&lt;li&gt;DTLS handshake + ICE connectivity check&lt;/li&gt;
&lt;li&gt;DataChannel established&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;ICE Gathering:&lt;/strong&gt; Unlike standard WebRTC (which uses &amp;quot;Trickle ICE&amp;quot; to send
candidates as they&apos;re discovered), QWBP waits for complete ICE gathering before
encoding the QR. Implementation must wait for &lt;code&gt;iceGatheringState: &apos;complete&apos;&lt;/code&gt;.
This adds 1-2 seconds of latency but ensures the QR contains all candidates
needed for connection—better than fast QR generation with failed scans.&lt;/p&gt;
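&lt;p&gt;That wait is a one-promise helper. A sketch, typed loosely so it relies only
on the gathering-state API the post mentions:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;// Resolve once ICE gathering finishes, so the QR contains every candidate
function waitForIceComplete(
  pc: {
    iceGatheringState: string;
    addEventListener(type: string, cb: () =&amp;gt; void): void;
    removeEventListener(type: string, cb: () =&amp;gt; void): void;
  },
  timeoutMs = 5000,
): Promise&amp;lt;void&amp;gt; {
  return new Promise((resolve, reject) =&amp;gt; {
    if (pc.iceGatheringState === &amp;quot;complete&amp;quot;) return resolve();
    const timer = setTimeout(
      () =&amp;gt; reject(new Error(&amp;quot;ICE gathering timed out&amp;quot;)),
      timeoutMs,
    );
    const onChange = () =&amp;gt; {
      if (pc.iceGatheringState === &amp;quot;complete&amp;quot;) {
        clearTimeout(timer);
        pc.removeEventListener(&amp;quot;icegatheringstatechange&amp;quot;, onChange);
        resolve();
      }
    };
    pc.addEventListener(&amp;quot;icegatheringstatechange&amp;quot;, onChange);
  });
}
&lt;/code&gt;&lt;/pre&gt;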
&lt;p&gt;&lt;strong&gt;Final optimization decisions:&lt;/strong&gt;&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Decision&lt;/th&gt;
&lt;th&gt;Rationale&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Derive ICE credentials via HKDF&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Per-session uniqueness without transmission overhead.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Skip compression&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;High-entropy binary data expands under DEFLATE.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Skip base64&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;QR codes support raw binary natively.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;3 host + 1 srflx candidates&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Handles VPN, tethering, and cross-network scenarios.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Symmetric identity exchange&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Eliminates race conditions and role assignment complexity.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;mDNS as UUID in IPv6 slot&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Preserves binary format while supporting browser privacy features.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;The Compression Journey&lt;/h2&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Stage&lt;/th&gt;
&lt;th&gt;Bytes&lt;/th&gt;
&lt;th&gt;QR Version&lt;/th&gt;
&lt;th&gt;Scan Time&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Standard WebRTC SDP&lt;/td&gt;
&lt;td&gt;2,487&lt;/td&gt;
&lt;td&gt;v34-40&lt;/td&gt;
&lt;td&gt;10+ sec&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Remove boilerplate&lt;/td&gt;
&lt;td&gt;820&lt;/td&gt;
&lt;td&gt;v20&lt;/td&gt;
&lt;td&gt;6 sec&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Hardcode credentials&lt;/td&gt;
&lt;td&gt;770&lt;/td&gt;
&lt;td&gt;v20&lt;/td&gt;
&lt;td&gt;6 sec&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Filter candidates&lt;/td&gt;
&lt;td&gt;210&lt;/td&gt;
&lt;td&gt;v9&lt;/td&gt;
&lt;td&gt;3 sec&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Binary format&lt;/td&gt;
&lt;td&gt;91&lt;/td&gt;
&lt;td&gt;v5&lt;/td&gt;
&lt;td&gt;1 sec&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Skip base64&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;55-100&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;v4-5&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;&amp;lt;0.5 sec&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;&lt;strong&gt;97.79% reduction.&lt;/strong&gt; In my testing, Version 4 QR codes scanned in under a
second across varied lighting conditions—a significant improvement over the v30+
codes I started with.&lt;/p&gt;
&lt;p&gt;The QR codes use &lt;strong&gt;Error Correction Level L&lt;/strong&gt; (7% recovery). For binary data
displayed on screens—high contrast, no physical damage—Level L minimizes size
while remaining scannable. Higher levels (M at 15%, H at 30%) would push v4
codes back to v5-6, defeating the optimization work.&lt;/p&gt;
&lt;h2&gt;A Note on &amp;quot;Serverless&amp;quot;&lt;/h2&gt;
&lt;p&gt;The protocol works without servers on the same local network—both devices use
their LAN IP addresses (host candidates) and connect directly.&lt;/p&gt;
&lt;p&gt;For cross-network scenarios (one device on Wi-Fi, another on 5G), you need a
&lt;strong&gt;STUN server&lt;/strong&gt;[^15] to discover public IPs. STUN (Session Traversal Utilities
for NAT) is simple: your device asks &amp;quot;what&apos;s my public IP?&amp;quot; and the server
responds. Public STUN servers like &lt;code&gt;stun:stun.l.google.com:19302&lt;/code&gt; are free,
stateless, and don&apos;t relay your data—they just answer that one question. You
don&apos;t deploy or maintain them.&lt;/p&gt;
&lt;p&gt;[^15]:
RFC 8489 - Session Traversal Utilities for NAT (STUN),
https://datatracker.ietf.org/doc/html/rfc8489&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The QR Tango solves single symmetric NAT.&lt;/strong&gt; This was a pleasant discovery. NAT
(Network Address Translation) is how your router lets multiple devices share one
public IP—but it creates problems for peer-to-peer connections because devices
can&apos;t directly reach each other. Symmetric NAT[^16] is the strictest type: it
allocates a different external port for each destination and drops packets from
any host the device hasn&apos;t contacted first. Traditional WebRTC signaling
struggles here because one side waits for the other.&lt;/p&gt;
&lt;p&gt;But with QWBP, both devices have complete connection information from the QR
codes. Both can fire packets simultaneously. When Device A sends to Device B,
Device A&apos;s NAT opens a &amp;quot;hole&amp;quot; for return traffic. Device B does the same. The
packets cross in flight, each NAT sees outgoing traffic, and both allow the
responses through. This is called &amp;quot;simultaneous open&amp;quot; or hole punching[^17]—and
it works because neither device is waiting for the other.&lt;/p&gt;
&lt;p&gt;For symmetric NAT on &lt;em&gt;both&lt;/em&gt; sides, a &lt;strong&gt;TURN relay&lt;/strong&gt; is still needed. TURN
(Traversal Using Relays around NAT) is a server that both devices connect to,
which then forwards traffic between them—a last resort when direct connection is
impossible. Neither peer can predict what port their NAT will assign for the
other destination—it&apos;s a deadlock that even simultaneous transmission can&apos;t
solve. This affects maybe 10% of connections, mostly on enterprise Wi-Fi and
carrier-grade NAT. An acknowledged limitation.&lt;/p&gt;
&lt;p&gt;[^16]:
RFC 4787 - Network Address Translation (NAT) Behavioral Requirements for
Unicast UDP, https://datatracker.ietf.org/doc/html/rfc4787&lt;/p&gt;
&lt;p&gt;[^17]:
RFC 5128 - State of Peer-to-Peer (P2P) Communication across Network Address
Translators (NATs), https://datatracker.ietf.org/doc/html/rfc5128&lt;/p&gt;
&lt;h2&gt;When It Fails&lt;/h2&gt;
&lt;p&gt;QWBP handles most same-network scenarios, but some failures are expected:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Same Wi-Fi but won&apos;t connect:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;VPN active on one device → try disabling VPN or use mobile hotspot&lt;/li&gt;
&lt;li&gt;Enterprise firewall blocking peer traffic → TURN relay required&lt;/li&gt;
&lt;li&gt;iOS local network permission denied → check Settings &amp;gt; Privacy &amp;gt; Local Network&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;QR scanned but nothing happens:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Scanned a menu/URL QR → magic byte validation rejects non-QWBP codes&lt;/li&gt;
&lt;li&gt;Session expired → the 30-second timeout passed; regenerate QR and try again&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Connection drops immediately:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;DTLS handshake failed → certificates may have regenerated; restart both
devices&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Glare still possible?&lt;/strong&gt; No. Fingerprint comparison deterministically assigns
roles after both scans complete. If both devices compute the same role (only
possible with identical fingerprints = scanning yourself), the protocol throws
an error.&lt;/p&gt;
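&lt;p&gt;The comparison itself fits in a few lines. An illustrative sketch, not the
normative QWBP algorithm (the function name and the convention for which side
wins are mine): compare the two 32-byte fingerprints lexicographically and let
the lower one take the offerer role.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;// Illustrative sketch: deterministic role assignment from fingerprints.
// Identical fingerprints can only mean a device scanned its own QR code.
function decideRole(mine: Uint8Array, theirs: Uint8Array): &amp;quot;offerer&amp;quot; | &amp;quot;answerer&amp;quot; {
  for (let i = 0; i &amp;lt; mine.length; i++) {
    if (mine[i] !== theirs[i]) {
      return mine[i] &amp;lt; theirs[i] ? &amp;quot;offerer&amp;quot; : &amp;quot;answerer&amp;quot;;
    }
  }
  throw new Error(&amp;quot;Identical fingerprints: scanned our own QR code&amp;quot;);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Because the comparison is antisymmetric, the two devices always land on
complementary roles without exchanging another message.&lt;/p&gt;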
&lt;h2&gt;What I Learned&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Semantic compression beats generic compression.&lt;/strong&gt; DEFLATE on the original
SDP achieves a 60% reduction. Understanding what data is &lt;em&gt;actually needed&lt;/em&gt;, pure
domain knowledge, achieves 97.79%.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&amp;quot;Best practices&amp;quot; assume interoperability.&lt;/strong&gt; ICE credentials exist because
generic WebRTC implementations can&apos;t trust the signaling channel. When you
control both endpoints and authenticate via QR scan, the threat model changes.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Physics constrains design.&lt;/strong&gt; I spent Thursday evening optimizing compression
before realizing the return trip—not payload size—was the real problem.
Bidirectional QR scanning wasn&apos;t a workaround; it was the only viable serverless
channel.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Dialogue beats solitary genius.&lt;/strong&gt; The protocol emerged from conversation, not
isolation. More on this below.&lt;/p&gt;
&lt;h2&gt;What&apos;s Next&lt;/h2&gt;
&lt;p&gt;The protocol works for any WebRTC project needing QR-based signaling. The
techniques apply to any protocol where you control both endpoints.&lt;/p&gt;
&lt;p&gt;I&apos;ve published a formal specification, a TypeScript library, and a live demo:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://github.com/magarcia/qwbp/blob/main/SPECIFICATION.md&quot;&gt;QWBP Specification&lt;/a&gt;&lt;/strong&gt;
— The complete protocol reference&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://www.npmjs.com/package/qwbp&quot;&gt;qwbp on npm&lt;/a&gt;&lt;/strong&gt; — Drop-in
TypeScript/JavaScript library&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://magarcia.github.io/qwbp&quot;&gt;Live Demo&lt;/a&gt;&lt;/strong&gt; — Try it between two devices
right now&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you build something with QWBP, I&apos;d love to hear about it.&lt;/p&gt;
&lt;h2&gt;Rubber Ducking with a Robot&lt;/h2&gt;
&lt;p&gt;I should be transparent about how this protocol came together: I didn&apos;t design
it alone. I designed it in conversation with Claude, Anthropic&apos;s AI assistant.&lt;/p&gt;
&lt;p&gt;It started with a problem: &amp;quot;I have a PWA with no backend, and a user wants to
sync their game progress to a new phone.&amp;quot; I shared this with Claude, and we
started exploring options. WebRTC looked promising but the signaling overhead
seemed insurmountable. Over the course of several sessions—Thursday evening into
Friday morning—the conversation evolved from &amp;quot;this is impossible&amp;quot; to &amp;quot;wait, what
if we just...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What AI did well:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Research at conversation speed.&lt;/strong&gt; When I asked &amp;quot;can I hardcode ICE
credentials?&amp;quot;, Claude pulled the relevant RFC sections and explained the
security implications in seconds. When I wondered if Web Bluetooth could work,
Claude systematically eliminated it by citing specific browser API
limitations. This kind of RFC-diving and compatibility research would have
taken me hours or days.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Provided resistance to push against.&lt;/strong&gt; Claude kept insisting the
&amp;quot;offer/answer&amp;quot; distinction was fundamental to WebRTC—you need an offer, you
need an answer, that&apos;s how it works. That resistance forced me to articulate
&lt;em&gt;why&lt;/em&gt; I thought we could do better, until I asked: &amp;quot;What if we infer the roles
from something already in the QR?&amp;quot; That question—mine, born from frustration
with the constraint—led to the symmetric fingerprint comparison that
eliminated race conditions. Sometimes AI is most useful when it&apos;s wrong.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Validated security decisions.&lt;/strong&gt; When I proposed deriving ICE credentials
from the DTLS fingerprint, I wasn&apos;t sure if I was introducing vulnerabilities.
Claude analyzed the threat model and confirmed the real security boundary is
the DTLS handshake, not the ICE layer—the change was safe.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Caught things I missed.&lt;/strong&gt; The &amp;quot;compression paradox&amp;quot; (DEFLATE making the
payload &lt;em&gt;larger&lt;/em&gt;) emerged when Claude ran the actual numbers. I would have
assumed compression always helps.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;What AI didn&apos;t do:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Make architectural decisions.&lt;/strong&gt; Every design choice—the binary format, the
QR Tango UX, the candidate limits—came from me asking &amp;quot;what if?&amp;quot; and Claude
helping me evaluate the tradeoffs. The AI never said &amp;quot;here&apos;s the design.&amp;quot; It
said &amp;quot;here&apos;s what happens if you do X.&amp;quot;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Replace domain intuition.&lt;/strong&gt; Knowing that a 55-byte payload &amp;quot;feels&amp;quot; right for
QR codes, or that users over 50 won&apos;t tolerate animated QR sequences—that came
from building products, not from prompting.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;The honest assessment:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Without AI, I probably would have given up after a few hours. This wasn&apos;t a
critical problem—I could have told the user &amp;quot;sorry, this is not possible&amp;quot; and
moved on. No one was demanding a solution. But because each question got an
answer in seconds instead of hours, I kept going. Each small breakthrough made
the next question worth asking. The momentum carried me through problems I would
have abandoned.&lt;/p&gt;
&lt;p&gt;Reading RFCs, testing browser quirks, validating security assumptions—weeks of
unglamorous work. With AI, I compressed it into a day. Not because AI is
smarter, but because it&apos;s &lt;em&gt;faster at the boring parts&lt;/em&gt;, and that speed changes
what feels worth attempting.&lt;/p&gt;
&lt;p&gt;The experience felt like pair programming with someone who has read every RFC
but has no opinions. I drove the architecture. Claude drove the research. When I
got stuck, I&apos;d describe the problem out loud (rubber ducking), and Claude would
either confirm my instinct or point out something I&apos;d missed.&lt;/p&gt;
&lt;h2&gt;Appendix: Quick Reference&lt;/h2&gt;
&lt;p&gt;For the complete specification, see the
&lt;a href=&quot;https://github.com/magarcia/qwbp/blob/main/SPECIFICATION.md&quot;&gt;QWBP Specification&lt;/a&gt;.
Here&apos;s a quick reference for the binary format.&lt;/p&gt;
&lt;h3&gt;Packet Structure&lt;/h3&gt;
&lt;pre&gt;&lt;code&gt;┌───────────┬─────────────┬──────────────────────┬────────────────────┐
│ Magic (1B)│ Version (1B)│ Fingerprint (32B)    │ Candidates (Var)   │
│ 0x51 &apos;Q&apos;  │ Version:3b  │ SHA-256 DTLS         │ Binary-packed IPs  │
│           │ Reserved:5b │ (32 raw bytes)       │ (7B IPv4, 19B IPv6)│
└───────────┴─────────────┴──────────────────────┴────────────────────┘
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Flags Byte Layout&lt;/h3&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Bits&lt;/th&gt;
&lt;th&gt;Field&lt;/th&gt;
&lt;th&gt;Values&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;0-1&lt;/td&gt;
&lt;td&gt;Address Family&lt;/td&gt;
&lt;td&gt;&lt;code&gt;00&lt;/code&gt;=IPv4, &lt;code&gt;01&lt;/code&gt;=IPv6, &lt;code&gt;10&lt;/code&gt;=mDNS&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;Protocol&lt;/td&gt;
&lt;td&gt;&lt;code&gt;0&lt;/code&gt;=UDP, &lt;code&gt;1&lt;/code&gt;=TCP&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;Candidate Type&lt;/td&gt;
&lt;td&gt;&lt;code&gt;0&lt;/code&gt;=Host, &lt;code&gt;1&lt;/code&gt;=srflx&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4-5&lt;/td&gt;
&lt;td&gt;TCP Type&lt;/td&gt;
&lt;td&gt;&lt;code&gt;00&lt;/code&gt;=passive, &lt;code&gt;01&lt;/code&gt;=active, &lt;code&gt;10&lt;/code&gt;=so&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;6-7&lt;/td&gt;
&lt;td&gt;Reserved&lt;/td&gt;
&lt;td&gt;Must be &lt;code&gt;0&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3&gt;Test Vector&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Minimal valid packet (1 IPv4 host candidate):&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Hex: 51 00 [32 bytes fingerprint] 00 C0A80105 D431
     ^  ^   ^                       ^  ^        ^
     |  |   |                       |  |        Port 54321
     |  |   |                       |  IP 192.168.1.5
     |  |   |                       Flags: IPv4, UDP, host
     |  |   DTLS fingerprint (SHA-256)
     |  Version 0
     Magic byte &apos;Q&apos;

Total: 1 + 1 + 32 + 1 + 4 + 2 = 41 bytes
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Decoded candidate line:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;a=candidate:1 1 udp 2122260223 192.168.1.5 54321 typ host
&lt;/code&gt;&lt;/pre&gt;
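&lt;p&gt;Decoding that vector can be sketched in TypeScript. This is derived from the
layout above rather than taken from the qwbp library, and the exact bit
positions of the version and address-family fields are assumptions:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;// Illustrative decoder for a minimal single-candidate packet.
function decodeMinimal(buf: Uint8Array) {
  if (buf[0] !== 0x51) throw new Error(&amp;quot;Not a QWBP packet (bad magic byte)&amp;quot;);
  const version = buf[1] &amp;gt;&amp;gt; 5;            // Version: 3 bits (assumed high bits)
  const fingerprint = buf.slice(2, 34);   // 32 raw SHA-256 bytes
  const flags = buf[34];
  if ((flags &amp;amp; 0b11) !== 0b00) throw new Error(&amp;quot;Sketch handles IPv4 only&amp;quot;);
  const ip = Array.from(buf.slice(35, 39)).join(&amp;quot;.&amp;quot;);
  const port = (buf[39] &amp;lt;&amp;lt; 8) | buf[40];  // big-endian port
  return { version, fingerprint, ip, port };
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Fed the 41-byte vector, it returns version 0, &lt;code&gt;192.168.1.5&lt;/code&gt;, and port 54321,
matching the annotations.&lt;/p&gt;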
&lt;br/&gt;
&lt;hr&gt;
&lt;br/&gt;
&lt;p&gt;&lt;em&gt;A user asked a simple question. I spent an evening and a morning talking to an
AI about protocol design. Being unreasonable turned out to be the only
reasonable solution.&lt;/em&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;em&gt;Want to learn more about AI-assisted development? Read about
&lt;a href=&quot;https://magarcia.io/when-ai-made-building-cheaper-than-the-meetings-to-plan-it/&quot;&gt;how AI is changing software team workflows&lt;/a&gt;
or explore
&lt;a href=&quot;https://magarcia.io/asking-ai-to-build-the-tool-instead-of-doing-the-task/&quot;&gt;techniques for building tools with AI&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
</content:encoded>
        </item>
        <item>
            <title><![CDATA[Why I Switched from Bun to Deno for Claude Code Skills]]></title>
            <description><![CDATA[Bun's auto-install breaks when any node_modules directory exists in parent paths—making skills fail in monorepos and project directories. Deno's npm: specifier provides consistent behavior everywhere, making it the better choice for portable Claude Code skills.]]></description>
            <link>https://magarcia.io/why-i-switched-from-bun-to-deno-for-claude-code-skills/</link>
            <guid isPermaLink="false">https://magarcia.io/why-i-switched-from-bun-to-deno-for-claude-code-skills/</guid>
            <category><![CDATA[typescript]]></category>
            <category><![CDATA[deno]]></category>
            <category><![CDATA[bun]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[developer-tools]]></category>
            <category><![CDATA[cli]]></category>
            <dc:creator><![CDATA[contact@magarcia.io]]></dc:creator>
            <pubDate>Thu, 15 Jan 2026 00:00:00 GMT</pubDate>
            <content:encoded>&lt;p&gt;&lt;strong&gt;Bun&apos;s auto-install feature breaks when any &lt;code&gt;node_modules&lt;/code&gt; directory exists in
parent paths.&lt;/strong&gt; This makes Bun unreliable for portable Claude Code skills that
run from project directories or monorepos. After testing
&lt;a href=&quot;https://magarcia.io/writing-powerful-claude-code-skills-with-npx-bun/&quot;&gt;my previous npx bun approach&lt;/a&gt;
in real environments, I switched to Deno. Here is why Deno&apos;s &lt;code&gt;npm:&lt;/code&gt; specifier is
the better choice for self-contained TypeScript skills.&lt;/p&gt;
&lt;p&gt;Bun&apos;s auto-install only works when no &lt;code&gt;node_modules&lt;/code&gt; directory exists in the
working directory or any parent directory. When &lt;code&gt;node_modules&lt;/code&gt; is present
anywhere up the tree, Bun switches to standard Node.js module resolution.
Version specifiers in imports—the core feature that made the approach
useful—throw &lt;code&gt;VersionSpecifierNotAllowedHere&lt;/code&gt; errors:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ cd ~/my-project  # has node_modules/
$ cat skill.ts
#!/usr/bin/env -S npx -y bun
import chalk from &amp;quot;chalk@^5.0.0&amp;quot;
console.log(chalk.green(&amp;quot;Hello&amp;quot;))

$ ./skill.ts
error: VersionSpecifierNotAllowedHere
  import chalk from &amp;quot;chalk@^5.0.0&amp;quot;
                    ^
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This breaks in practical scenarios. Run a skill from within a project directory?
Broken. Work in a monorepo where some ancestor has &lt;code&gt;node_modules&lt;/code&gt;? Broken. Your
home directory happens to have an old &lt;code&gt;node_modules&lt;/code&gt; from a forgotten
experiment? Broken.&lt;/p&gt;
&lt;p&gt;For portable Claude Code skills that might run from anywhere, this is a footgun.
The script works when you test it in &lt;code&gt;~/.claude/skills/&lt;/code&gt;, then fails
mysteriously when Claude invokes it from a different directory. The error
message obscures the problem—diagnosing it requires understanding Bun&apos;s internal
resolution logic.&lt;/p&gt;
&lt;p&gt;Credit for the solution goes to
&lt;a href=&quot;https://www.threads.net/@jwynia/post/DTgiB62DaxN&quot;&gt;J Edward Wynia&lt;/a&gt;, who pointed
me toward Deno in response to that article. I forget why I skipped Deno
initially—probably because Bun&apos;s syntax looked cleaner—but the suggestion was
right.&lt;/p&gt;
&lt;h2&gt;Why Deno Solves This&lt;/h2&gt;
&lt;p&gt;Deno&apos;s &lt;code&gt;npm:&lt;/code&gt; specifier works regardless of whether &lt;code&gt;node_modules&lt;/code&gt; exists.
Dependencies always go to Deno&apos;s global cache at &lt;code&gt;~/.cache/deno&lt;/code&gt;. Local
&lt;code&gt;node_modules&lt;/code&gt; directories don&apos;t affect resolution—though you need the
&lt;code&gt;--node-modules-dir=false&lt;/code&gt; flag to ensure this behavior when running from
directories that already have a &lt;code&gt;node_modules&lt;/code&gt; folder. Consistent behavior
everywhere.&lt;/p&gt;
&lt;p&gt;The same &lt;code&gt;npx&lt;/code&gt; distribution trick works. Just like &lt;code&gt;npx -y bun&lt;/code&gt;, you can use
&lt;code&gt;npx -y deno&lt;/code&gt; to run Deno without installing it globally. Any environment with
npm can execute Deno scripts.&lt;/p&gt;
&lt;p&gt;One caveat: if Deno is already installed on your system, &lt;code&gt;npx -y deno&lt;/code&gt; still
downloads a separate copy to npm&apos;s cache (~40MB, comparable to Bun&apos;s ~100MB
first-download cost). For systems with Deno pre-installed, use &lt;code&gt;deno run&lt;/code&gt;
directly. The &lt;code&gt;npx&lt;/code&gt; approach targets portability—scripts that work on any
machine with npm, regardless of what&apos;s pre-installed.&lt;/p&gt;
&lt;h2&gt;The Deno Approach&lt;/h2&gt;
&lt;p&gt;Here&apos;s what a Deno-based skill looks like:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;#!/usr/bin/env -S npx -y deno run --node-modules-dir=false --allow-read --allow-write

import { parse } from &amp;quot;npm:csv-parse@^5.0/sync&amp;quot;;
import chalk from &amp;quot;npm:chalk@^5.0.0&amp;quot;;
import { z } from &amp;quot;npm:zod@^3.23&amp;quot;;

const inputPath = Deno.args[0];
const content = await Deno.readTextFile(inputPath);

const rows = parse(content, { columns: true });
console.log(chalk.green(`Parsed ${rows.length} rows`));
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;npm:&lt;/code&gt; prefix is more verbose than Bun&apos;s bare imports, but it clarifies
package origins. TypeScript works natively. Version pinning lives in the import
path, same as with Bun. No &lt;code&gt;deno.json&lt;/code&gt; or import map required—dependencies
resolve directly from the specifiers.&lt;/p&gt;
&lt;p&gt;Deno requires permission flags—&lt;code&gt;--allow-read&lt;/code&gt;, &lt;code&gt;--allow-write&lt;/code&gt;, &lt;code&gt;--allow-net&lt;/code&gt;,
etc. More verbose than Bun, but you declare exactly what the script does. For
skills running through Claude Code, explicit permissions document what the
script can access. For trusted environments, &lt;code&gt;--allow-all&lt;/code&gt; (or &lt;code&gt;-A&lt;/code&gt;) skips the
ceremony.&lt;/p&gt;
&lt;h2&gt;Trade-offs&lt;/h2&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;Bun&lt;/th&gt;
&lt;th&gt;Deno&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Import syntax&lt;/td&gt;
&lt;td&gt;&lt;code&gt;import x from &amp;quot;pkg@1.0&amp;quot;&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;import x from &amp;quot;npm:pkg@1.0&amp;quot;&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;node_modules safe&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Raw performance&lt;/td&gt;
&lt;td&gt;~20-30% faster&lt;/td&gt;
&lt;td&gt;Slightly slower&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;TypeScript&lt;/td&gt;
&lt;td&gt;Native&lt;/td&gt;
&lt;td&gt;Native&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Permissions model&lt;/td&gt;
&lt;td&gt;Permissive by default&lt;/td&gt;
&lt;td&gt;Explicit flags required&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Bun is faster. Startup time, runtime performance, HTTP serving—Bun consistently
beats Deno in benchmarks. If you&apos;re building a production API or a
performance-critical CLI tool, that matters.&lt;/p&gt;
&lt;p&gt;For Claude Code skills, it doesn&apos;t.&lt;/p&gt;
&lt;h2&gt;Why Performance Doesn&apos;t Matter Here&lt;/h2&gt;
&lt;p&gt;The agent&apos;s thinking time dwarfs script execution time. Claude takes two to five
seconds to decide what to do next. A skill that runs in 50 milliseconds versus
80 milliseconds is effectively the same—both are instant compared to the agent&apos;s
decision loop.&lt;/p&gt;
&lt;p&gt;Reliability matters more. A skill that works from any directory is more valuable
than a skill that&apos;s 30% faster but breaks in monorepos.&lt;/p&gt;
&lt;h2&gt;Practical Example for Skills&lt;/h2&gt;
&lt;p&gt;The structure follows the same pattern from the
&lt;a href=&quot;https://magarcia.io/writing-powerful-claude-code-skills-with-npx-bun/&quot;&gt;original article&lt;/a&gt;—a
&lt;code&gt;SKILL.md&lt;/code&gt; pointing to executable scripts. The only changes are the shebang and
Deno-specific APIs:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;#!/usr/bin/env -S npx -y deno run --node-modules-dir=false --allow-read --allow-write

import { parse } from &amp;quot;npm:csv-parse@^5.0/sync&amp;quot;;
import * as XLSX from &amp;quot;npm:xlsx@^0.20&amp;quot;;

const inputPath = Deno.args[0];
const content = await Deno.readTextFile(inputPath);

const rows = parse(content, { columns: true });
console.log(JSON.stringify(rows, null, 2));
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Claude runs the skill, the script accesses npm packages, and everything works
regardless of directory.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;The &lt;code&gt;npm:&lt;/code&gt; prefix is more verbose. Permission flags add ceremony. Bun&apos;s import
syntax is cleaner and faster. But Deno&apos;s reliability across different directory
structures makes it the better choice for Claude Code skills.&lt;/p&gt;
&lt;p&gt;You don&apos;t have to debug why a skill works in one directory and fails in another.
You don&apos;t have to document &amp;quot;this only works outside of projects with
node_modules.&amp;quot; The script just works.&lt;/p&gt;
&lt;p&gt;If Bun adds a flag to force auto-install regardless of &lt;code&gt;node_modules&lt;/code&gt; presence,
I&apos;d reconsider. Until then, Deno&apos;s consistency wins.&lt;/p&gt;
&lt;h2&gt;References&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://magarcia.io/writing-powerful-claude-code-skills-with-npx-bun/&quot;&gt;Writing Powerful Claude Code Skills with npx bun&lt;/a&gt;
— The original exploration of this approach&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://deno.land&quot;&gt;Deno — A modern runtime for JavaScript and TypeScript&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.deno.com/runtime/fundamentals/node/#using-npm-packages&quot;&gt;Deno npm compatibility&lt;/a&gt;
— How the &lt;code&gt;npm:&lt;/code&gt; specifier works&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://bun.sh/docs/runtime/auto-install&quot;&gt;Bun Auto-Install Documentation&lt;/a&gt; —
Understanding when auto-install activates&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.anthropic.com/en/docs/claude-code/skills&quot;&gt;Claude Code Skills Documentation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</content:encoded>
        </item>
        <item>
            <title><![CDATA[YumML: A Human-Readable YAML Recipe Format for the AI Era]]></title>
            <description><![CDATA[YumML is an open-source recipe format based on YAML that is readable by humans, parseable by machines, and optimized for AI assistants. Learn why existing recipe formats fail and how YumML solves these problems.]]></description>
            <link>https://magarcia.io/yumml-recipe-format/</link>
            <guid isPermaLink="false">https://magarcia.io/yumml-recipe-format/</guid>
            <category><![CDATA[open-source]]></category>
            <category><![CDATA[javascript]]></category>
            <category><![CDATA[typescript]]></category>
            <category><![CDATA[ai]]></category>
            <dc:creator><![CDATA[contact@magarcia.io]]></dc:creator>
            <pubDate>Sat, 10 Jan 2026 00:00:00 GMT</pubDate>
            <content:encoded>&lt;p&gt;&lt;strong&gt;YumML&lt;/strong&gt; is a human-readable recipe format based on YAML that makes cooking
recipes easy to read, write, and parse. Unlike XML-based formats like RecipeML
or proprietary formats, YumML prioritizes readability while remaining fully
machine-parseable and AI-friendly.&lt;/p&gt;
&lt;p&gt;Years ago, a friend and I realized that no standard format exists for
saving cooking recipes. At that time I was learning React, and one of my early
projects was a recipe management app. I didn&apos;t succeed (like many projects I
started and never finished), but along the way I found a draft spec for a recipe
format: &lt;strong&gt;YumML&lt;/strong&gt;.&lt;/p&gt;
&lt;br/&gt;
&lt;p&gt;&lt;img src=&quot;https://imgs.xkcd.com/comics/standards.png&quot; alt=&quot;XKCD comic 927: A person explains how there are 14 competing standards, so they create a new universal one. Result: 15 competing standards.&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Yes, I&apos;m fully aware of the irony. &lt;a href=&quot;https://xkcd.com/927&quot;&gt;xkcd.com/927&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;br/&gt;
&lt;p&gt;Paul Jenkins published this format on &lt;a href=&quot;http://vikingco.de/&quot;&gt;vikingco.de&lt;/a&gt;, but
the site is now offline. The only references available are from the
&lt;a href=&quot;https://web.archive.org/web/20160730232450/http://vikingco.de/&quot;&gt;Internet Archive&lt;/a&gt;
and the
&lt;a href=&quot;https://github.com/vikingcode/vikingcode.github.io&quot;&gt;repository of the page&lt;/a&gt;
that remains available.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”,
“SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be
interpreted as described in &lt;a href=&quot;https://tools.ietf.org/html/rfc2119&quot;&gt;RFC 2119&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;History&lt;/h2&gt;
&lt;p&gt;Paul Jenkins wrote the first draft of the YumML format on September 21, 2011.
You can check the
&lt;a href=&quot;https://web.archive.org/web/20160730232450/http://vikingco.de/yumml.html&quot;&gt;original specification here&lt;/a&gt;,
which I rescued from the old archives.&lt;/p&gt;
&lt;p&gt;This draft is promising as a format for cooking recipes, so I want to recover it
from the forgotten archives and revive it, adding extra functionality and a
more complete specification.&lt;/p&gt;
&lt;h2&gt;Motivation&lt;/h2&gt;
&lt;p&gt;As the original author of YumML noted in his blog post, most cooking recipe
formats suck. Having investigated them myself, I agree. The landscape of recipe
interchange formats is fragmented, but most share one trait: they lack human
readability.&lt;/p&gt;
&lt;p&gt;Most cooking software ships with proprietary formats that only that software can
read. Some can be imported by other programs.
&lt;a href=&quot;http://web.archive.org/web/20151029032924/http://episoft.home.comcast.net:80/~episoft/&quot;&gt;Meal-Master&lt;/a&gt;
is one of the few widely supported formats, and thanks to that there is a
&lt;a href=&quot;http://www.ffts.com/recipes.htm&quot;&gt;huge collection of recipes&lt;/a&gt; available online.&lt;/p&gt;
&lt;p&gt;But looking at an example, the format seems poorly specified:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-txt&quot;&gt;MMMMM----- Now You&apos;re Cooking! v5.65 [Meal-Master Export Format]

      Title: Agua De Valencia
 Categories: beverages, spanish
      Yield: 4 servings

      1    bottle of spanish cava
           -(sparkling wine or; champag
           plenty fresh orange juice
           cointreau
           ice cubes

Put some ice cubes into a large jug and pour over lots of orange juice. Now
add the bottle of cava. Once the fizz subsides, stir in a good dash of the
cointreau and it&apos;s ready to serve.

  Contributor:  Esther Pérez Solsona

  NYC Nutrilink: N0^00000,N0^00000,N0^00000,N0^00000
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;There are some other &amp;quot;famous&amp;quot; formats like:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;http://www.formatdata.com/recipeml/index.html&quot;&gt;RecipeML&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;http://reml.sourceforge.net/&quot;&gt;Recipe Exchange Markup Language&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;http://www.kalorio.de/index.php?Mod=Ac&amp;amp;Cap=CE&amp;amp;SCa=../cml/CookML_EN&quot;&gt;CookML&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;But they are mainly XML-based formats, and no human can read a recipe written in
them. If you&apos;re interested in finding other recipe formats, there is a
&lt;a href=&quot;http://microformats.org/wiki/recipe-formats&quot;&gt;list of software-related cooking formats&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;It&apos;s worth mentioning some formats that came with the age of the Internet. HTML
microdata like
&lt;a href=&quot;https://developers.google.com/search/docs/data-types/recipe&quot;&gt;Google rich snippets&lt;/a&gt;
and &lt;a href=&quot;http://schema.org/Recipe&quot;&gt;schema.org microdata&lt;/a&gt; are widely used by
commercial recipe sites. Although microdata&apos;s main objective is
machine-readability (especially for SEO), the recipes stay embedded in pages
that people can still read and follow.&lt;/p&gt;
&lt;p&gt;Finally, I found &lt;a href=&quot;https://6xq.net/pesto/&quot;&gt;pesto&lt;/a&gt;, which aims for a simpler
human-readable format, but I find it hard for anyone unfamiliar with the syntax
to understand.&lt;/p&gt;
&lt;h2&gt;Original design considerations&lt;/h2&gt;
&lt;p&gt;The original author of YumML had some considerations in mind during the design
of the format:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;It does not need a long reference guide.&lt;/li&gt;
&lt;li&gt;It can be easily read by non-technical people in the &amp;quot;raw&amp;quot; format.&lt;/li&gt;
&lt;li&gt;It can be translated between imperial and metric.&lt;/li&gt;
&lt;li&gt;It aims to be something like the &lt;em&gt;markdown&lt;/em&gt; of recipes (but still easy to
parse with software).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;YumML is based on YAML to create a human &lt;em&gt;and&lt;/em&gt; system readable format.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;From &lt;a href=&quot;https://yaml.org/&quot;&gt;yaml.org&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;What It Is: YAML is a human friendly data serialization standard for all
programming languages.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Why YAML in the Age of AI&lt;/h2&gt;
&lt;p&gt;When the original YumML spec was drafted in 2011, the consideration was purely
about human and machine readability. Today, there&apos;s a third reader to consider:
&lt;strong&gt;Large Language Models (LLMs)&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;As AI assistants become common in kitchens (think voice assistants reading
recipes aloud, or chatbots helping you adapt recipes), the format you choose for
structured data matters more than ever. Recent research shows that data format
significantly impacts both &lt;strong&gt;token efficiency&lt;/strong&gt; (cost) and &lt;strong&gt;model accuracy&lt;/strong&gt;
when LLMs process structured information.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.improvingagents.com/blog/best-nested-data-format/&quot;&gt;Benchmarks across different models&lt;/a&gt;
show that YAML uses &lt;strong&gt;27-40% fewer tokens than JSON&lt;/strong&gt; and &lt;strong&gt;38-40% fewer than
XML&lt;/strong&gt; for the same data. Beyond cost savings, format also affects comprehension:
YAML achieved &lt;strong&gt;12-18 percentage points higher accuracy&lt;/strong&gt; than JSON when models
extracted information from nested data. The cleaner syntax with less punctuation
noise helps models parse semantic content more reliably. While
&lt;a href=&quot;https://www.curiouslychase.com/posts/yaml-vs-json-for-llm-token-efficiency-the-minification-truth&quot;&gt;minified JSON can be more efficient&lt;/a&gt;,
it sacrifices human readability entirely, which defeats YumML&apos;s core goals.&lt;/p&gt;
&lt;h3&gt;The best of both worlds&lt;/h3&gt;
&lt;p&gt;YAML&apos;s design strikes a balance that works well for this three-way readability
requirement:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Humans&lt;/strong&gt; can read it without training (no angle brackets or curly braces)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Machines&lt;/strong&gt; can parse it with standard libraries in any language&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;AI models&lt;/strong&gt; process it more efficiently and accurately than alternatives&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This makes YumML particularly well-suited for modern recipe applications where
an AI might help you scale a recipe, suggest substitutions, or convert between
metric and imperial, all while keeping the source format readable in a text
editor.&lt;/p&gt;
&lt;h2&gt;Goals&lt;/h2&gt;
&lt;p&gt;I want to define more formal goals for the spec.&lt;/p&gt;
&lt;p&gt;The YumML format:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;MUST&lt;/strong&gt; be human &lt;strong&gt;and&lt;/strong&gt; system readable.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;MUST&lt;/strong&gt; be self-contained, so it &lt;strong&gt;MUST NOT&lt;/strong&gt; require additional resources to
be interpreted.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;MUST&lt;/strong&gt; support different measurement systems &lt;em&gt;(metric, imperial)&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;SHOULD&lt;/strong&gt; be easy to translate into different languages &lt;em&gt;(recipes carry a
strong cultural influence, and language should not be a barrier to someone who
wants to understand them)&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;SHOULD&lt;/strong&gt; be easy to parse with existing tools.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;SHOULD&lt;/strong&gt; be easy to extend in the future.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Technical Details&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;File extension&lt;/strong&gt;: &lt;code&gt;.yumml&lt;/code&gt; (files &lt;strong&gt;MUST&lt;/strong&gt; be valid YAML)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;MIME type&lt;/strong&gt;: &lt;code&gt;application/x-yumml+yaml&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Encoding&lt;/strong&gt;: UTF-8&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Basic Example&lt;/h2&gt;
&lt;pre&gt;&lt;code class=&quot;language-yumml&quot;&gt;name: Mrs Fields Choc-Chip Cookies
date: 2011-09-21
prepTime: 15 minutes
cookTime: 10 minutes
ingredients:
  - quantity: 2.5
    unit: cups
    item: plain flour

  - quantity: 0.5
    unit: tsp
    item: bicarbonate of soda

instructions:
  - step: Mix flour, bicarbonate of soda, and salt in a large bowl
  - step: Blend sugars with electric mixer, add margarine to form a grainy paste
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Spec&lt;/h2&gt;
&lt;p&gt;There are three main sections that every recipe &lt;strong&gt;MUST&lt;/strong&gt; include: the header,
the ingredients list, and the instructions.&lt;/p&gt;
&lt;h3&gt;Header&lt;/h3&gt;
&lt;p&gt;The header is an implicit section where all the attributes are placed at the
root level of the file. All the attributes &lt;strong&gt;SHOULD&lt;/strong&gt; be placed at the top of
the file, before &lt;code&gt;ingredients&lt;/code&gt; and &lt;code&gt;instructions&lt;/code&gt;.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style=&quot;text-align:left&quot;&gt;Attribute&lt;/th&gt;
&lt;th style=&quot;text-align:left&quot;&gt;Type&lt;/th&gt;
&lt;th style=&quot;text-align:left&quot;&gt;Status&lt;/th&gt;
&lt;th style=&quot;text-align:left&quot;&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;name&lt;/code&gt;&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;string&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;&lt;strong&gt;REQUIRED&lt;/strong&gt;&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;Name of the recipe&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;date&lt;/code&gt;&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;string&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;OPTIONAL&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;Publication date (&lt;a href=&quot;https://tools.ietf.org/html/rfc3339#section-5.6&quot;&gt;RFC 3339&lt;/a&gt; format, e.g., &lt;code&gt;2017-07-21&lt;/code&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;author&lt;/code&gt;&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;string&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;OPTIONAL&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;Author of the recipe&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;description&lt;/code&gt;&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;string&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;OPTIONAL&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;Brief description of the recipe&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;prepTime&lt;/code&gt;&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;duration&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;OPTIONAL&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;Preparation time (e.g., &lt;code&gt;15 minutes&lt;/code&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;cookTime&lt;/code&gt;&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;duration&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;OPTIONAL&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;Cooking/baking time (e.g., &lt;code&gt;10 minutes&lt;/code&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;totalTime&lt;/code&gt;&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;duration&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;OPTIONAL&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;Total time (&lt;strong&gt;MAY&lt;/strong&gt; be derived from &lt;code&gt;prepTime&lt;/code&gt; + &lt;code&gt;cookTime&lt;/code&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;servings&lt;/code&gt;&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;integer&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;OPTIONAL&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;Number of portions the recipe serves&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;yield&lt;/code&gt;&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;string&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;OPTIONAL&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;What the recipe produces (e.g., &lt;code&gt;24 cookies&lt;/code&gt;, &lt;code&gt;1 loaf&lt;/code&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;rating&lt;/code&gt;&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;number (1-5)&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;OPTIONAL&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;Recipe rating&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;tags&lt;/code&gt;&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;string[]&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;OPTIONAL&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;Categorization tags&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;&lt;strong&gt;Duration format&lt;/strong&gt;: Human-readable strings. For maximum compatibility, use
integers followed by &lt;code&gt;minutes&lt;/code&gt;, &lt;code&gt;hours&lt;/code&gt;, or &lt;code&gt;seconds&lt;/code&gt; (e.g., &lt;code&gt;15 minutes&lt;/code&gt;,
&lt;code&gt;1 hour 30 minutes&lt;/code&gt;). Parsers &lt;strong&gt;SHOULD&lt;/strong&gt; also accept common abbreviations
(&lt;code&gt;15 min&lt;/code&gt;, &lt;code&gt;1 hr&lt;/code&gt;) and natural variations (&lt;code&gt;1.5 hours&lt;/code&gt;).&lt;/p&gt;
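&lt;p&gt;As an illustration only, a parser following this recommendation might normalize
durations to a single number of minutes. The function name and unit table below
are hypothetical, not part of the spec:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;// Hypothetical sketch: normalize a YumML duration string to minutes.
// Handles the recommended subset plus common abbreviations.
const MINUTES_PER_UNIT: Record&amp;lt;string, number&amp;gt; = {
  second: 1 / 60,
  sec: 1 / 60,
  minute: 1,
  min: 1,
  hour: 60,
  hr: 60,
};

function durationToMinutes(input: string): number {
  let total = 0;
  for (const [, amount, unit] of input.matchAll(/([\d.]+)\s*([a-zA-Z]+)/g)) {
    // Strip a trailing &amp;quot;s&amp;quot; so &amp;quot;minutes&amp;quot; and &amp;quot;hours&amp;quot; match their singular keys.
    const factor = MINUTES_PER_UNIT[unit.toLowerCase().replace(/s$/, &amp;quot;&amp;quot;)];
    if (factor === undefined) throw new Error(`Unknown unit: ${unit}`);
    total += parseFloat(amount) * factor;
  }
  return total;
}

durationToMinutes(&amp;quot;1 hour 30 minutes&amp;quot;); // 90
durationToMinutes(&amp;quot;1.5 hours&amp;quot;); // 90
&lt;/code&gt;&lt;/pre&gt;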
&lt;h4&gt;Example&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-yumml&quot;&gt;name: Mrs Fields Choc-Chip Cookies
date: 2011-09-21
author: Paul Jenkins
description: Classic chocolate chip cookies, crispy outside and chewy inside.
prepTime: 15 minutes
cookTime: 10 minutes
servings: 4
yield: 24 cookies
tags:
  - cookies
  - chocolate
ingredients:
  - quantity: 2.5
    unit: cups
    item: plain flour

  - quantity: 0.5
    unit: tsp
    item: bicarbonate of soda

instructions:
  - step: Mix flour, bicarbonate of soda, and salt in a large bowl
  - step: Blend sugars with electric mixer, add margarine to form a grainy paste
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Ingredients&lt;/h3&gt;
&lt;p&gt;The &lt;code&gt;ingredients&lt;/code&gt; section is a &lt;strong&gt;REQUIRED&lt;/strong&gt; list of all ingredients needed for
the recipe.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style=&quot;text-align:left&quot;&gt;Attribute&lt;/th&gt;
&lt;th style=&quot;text-align:left&quot;&gt;Type&lt;/th&gt;
&lt;th style=&quot;text-align:left&quot;&gt;Status&lt;/th&gt;
&lt;th style=&quot;text-align:left&quot;&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;quantity&lt;/code&gt;&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;number | string&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;OPTIONAL&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;Amount (number, fraction like &lt;code&gt;&amp;quot;1/2&amp;quot;&lt;/code&gt;, or descriptor like &lt;code&gt;&amp;quot;to taste&amp;quot;&lt;/code&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;unit&lt;/code&gt;&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;string&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;OPTIONAL&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;Unit of measurement (see canonical units below)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;item&lt;/code&gt;&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;string&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;&lt;strong&gt;REQUIRED&lt;/strong&gt;&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;Name of the ingredient&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;notes&lt;/code&gt;&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;string&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;OPTIONAL&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;Additional notes (e.g., &lt;code&gt;room temperature&lt;/code&gt;, &lt;code&gt;finely chopped&lt;/code&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;optional&lt;/code&gt;&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;boolean&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;OPTIONAL&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;Whether the ingredient is optional (default: &lt;code&gt;false&lt;/code&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h4&gt;Canonical Units&lt;/h4&gt;
&lt;p&gt;To support conversion between metric and imperial systems, implementations
&lt;strong&gt;SHOULD&lt;/strong&gt; recognize these canonical unit abbreviations:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style=&quot;text-align:left&quot;&gt;Category&lt;/th&gt;
&lt;th style=&quot;text-align:left&quot;&gt;Units&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style=&quot;text-align:left&quot;&gt;Volume&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;tsp&lt;/code&gt;, &lt;code&gt;tbsp&lt;/code&gt;, &lt;code&gt;cup&lt;/code&gt;, &lt;code&gt;ml&lt;/code&gt;, &lt;code&gt;l&lt;/code&gt;, &lt;code&gt;fl-oz&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;text-align:left&quot;&gt;Weight&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;g&lt;/code&gt;, &lt;code&gt;kg&lt;/code&gt;, &lt;code&gt;oz&lt;/code&gt;, &lt;code&gt;lb&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;text-align:left&quot;&gt;Count&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;&lt;em&gt;(omit &lt;code&gt;unit&lt;/code&gt; for countable items like eggs)&lt;/em&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
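&lt;p&gt;Canonical abbreviations make conversion a simple table lookup. A hypothetical
sketch (the function and factor table are mine, using standard US customary
values, not part of the spec):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;// Hypothetical sketch: convert canonical volume units to millilitres.
const ML_PER_UNIT: Record&amp;lt;string, number&amp;gt; = {
  tsp: 4.93,
  tbsp: 14.79,
  cup: 236.59,
  &amp;quot;fl-oz&amp;quot;: 29.57,
  ml: 1,
  l: 1000,
};

function toMillilitres(quantity: number, unit: string): number {
  const factor = ML_PER_UNIT[unit];
  if (factor === undefined) throw new Error(`Not a canonical volume unit: ${unit}`);
  return quantity * factor;
}

toMillilitres(2.5, &amp;quot;cup&amp;quot;); // roughly 591 ml
&lt;/code&gt;&lt;/pre&gt;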
&lt;h4&gt;Example&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-yumml&quot;&gt;ingredients:
  # Numeric quantities
  - quantity: 2.5
    unit: cups
    item: plain flour

  - quantity: 200
    unit: g
    item: chocolate chips
    notes: semi-sweet

  # Fraction string (more readable than 0.5)
  - quantity: &amp;quot;1/2&amp;quot;
    unit: cup
    item: walnuts
    notes: chopped
    optional: true

  # Countable items (no unit needed)
  - quantity: 2
    item: eggs
    notes: room temperature

  # Descriptor string for fuzzy amounts
  - quantity: &amp;quot;to taste&amp;quot;
    item: salt

  # No quantity (implied &amp;quot;some&amp;quot;)
  - item: cooking spray
    notes: for greasing
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Instructions&lt;/h3&gt;
&lt;p&gt;The &lt;code&gt;instructions&lt;/code&gt; section is a &lt;strong&gt;REQUIRED&lt;/strong&gt; list of steps to prepare the
recipe. Instructions support two formats: a simple flat list or grouped sections
for complex recipes.&lt;/p&gt;
&lt;h4&gt;Simple Format&lt;/h4&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style=&quot;text-align:left&quot;&gt;Attribute&lt;/th&gt;
&lt;th style=&quot;text-align:left&quot;&gt;Type&lt;/th&gt;
&lt;th style=&quot;text-align:left&quot;&gt;Status&lt;/th&gt;
&lt;th style=&quot;text-align:left&quot;&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;step&lt;/code&gt;&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;string&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;&lt;strong&gt;REQUIRED&lt;/strong&gt;&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;The instruction text&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;duration&lt;/code&gt;&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;duration&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;OPTIONAL&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;Time for this step&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;temperature&lt;/code&gt;&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;string&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;OPTIONAL&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;Temperature setting (e.g., &lt;code&gt;180C&lt;/code&gt;, &lt;code&gt;350F&lt;/code&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;pre&gt;&lt;code class=&quot;language-yumml&quot;&gt;instructions:
  - step: Preheat oven
    temperature: 180C
  - step: Mix dry ingredients in a large bowl
  - step: Bake until golden brown
    duration: 10 minutes
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Grouped Format&lt;/h4&gt;
&lt;p&gt;For complex recipes with multiple stages, instructions &lt;strong&gt;MAY&lt;/strong&gt; be organized into
sections:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Implementation note&lt;/strong&gt;: Parsers &lt;strong&gt;MUST&lt;/strong&gt; handle both formats. Check for the
presence of &lt;code&gt;section&lt;/code&gt; to determine the format: if any item has a &lt;code&gt;section&lt;/code&gt;
field, treat as grouped; otherwise, treat as simple.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-yumml&quot;&gt;instructions:
  - section: For the dough
    steps:
      - step: Mix flour and salt
      - step: Add water gradually

  - section: For the filling
    steps:
      - step: Sauté onions until translucent
        duration: 5 minutes
      - step: Add remaining filling ingredients
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Design Notes&lt;/h2&gt;
&lt;p&gt;A few decisions deserve explanation.&lt;/p&gt;
&lt;h3&gt;Why &lt;code&gt;quantity&lt;/code&gt; is optional and accepts strings&lt;/h3&gt;
&lt;p&gt;Cooking is fuzzy. Real recipes include instructions like &amp;quot;salt to taste&amp;quot;, &amp;quot;a
pinch of nutmeg&amp;quot;, or &amp;quot;butter for greasing&amp;quot;. Forcing a numeric quantity would
either exclude these cases or push authors toward awkward workarounds like
&lt;code&gt;quantity: 0&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Allowing strings also enables fractions like &lt;code&gt;&amp;quot;1/2&amp;quot;&lt;/code&gt;, which are more readable in
a recipe context than &lt;code&gt;0.5&lt;/code&gt;. The trade-off is that parsers need to handle
multiple types, but this reflects the inherent ambiguity of cooking rather than
fighting it.&lt;/p&gt;
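&lt;p&gt;In practice a parser can funnel the polymorphic field through one small helper.
A hypothetical sketch (the names are mine, not part of the spec):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;// Hypothetical sketch: resolve a YumML quantity to a number where possible.
// Returns null for descriptors like &amp;quot;to taste&amp;quot; or a missing quantity.
function resolveQuantity(quantity?: number | string): number | null {
  if (quantity === undefined) return null;
  if (typeof quantity === &amp;quot;number&amp;quot;) return quantity;
  const fraction = quantity.match(/^(\d+)\s*\/\s*(\d+)$/);
  if (fraction) return Number(fraction[1]) / Number(fraction[2]);
  const n = Number(quantity);
  return Number.isNaN(n) ? null : n;
}

resolveQuantity(&amp;quot;1/2&amp;quot;); // 0.5
resolveQuantity(&amp;quot;to taste&amp;quot;); // null
resolveQuantity(2.5); // 2.5
&lt;/code&gt;&lt;/pre&gt;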
&lt;h3&gt;Why human-readable durations instead of ISO 8601&lt;/h3&gt;
&lt;p&gt;ISO 8601 durations (&lt;code&gt;PT25M&lt;/code&gt;) are unambiguous and machine-friendly, but they fail
the &amp;quot;readable by a non-technical person&amp;quot; goal. A home cook glancing at a
&lt;code&gt;.yumml&lt;/code&gt; file should immediately understand &lt;code&gt;25 minutes&lt;/code&gt; without consulting a
reference.&lt;/p&gt;
&lt;p&gt;The spec recommends a structured subset (&lt;code&gt;integer + unit&lt;/code&gt;) for tools that need
reliable parsing but remains flexible enough to accept natural variations.&lt;/p&gt;
&lt;h3&gt;Why instructions support two formats&lt;/h3&gt;
&lt;p&gt;Simple recipes don&apos;t need sections. Forcing authors to write &lt;code&gt;section: main&lt;/code&gt; for
a basic cookie recipe adds friction. But complex recipes (like a multi-component
pie) genuinely benefit from grouping steps by stage.&lt;/p&gt;
&lt;p&gt;The polymorphic design prioritizes author convenience over parser simplicity.
The implementation note makes the parsing logic explicit: check for &lt;code&gt;section&lt;/code&gt; to
determine the format.&lt;/p&gt;
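&lt;p&gt;In TypeScript, that check collapses to a type guard. A hypothetical sketch (the
type names are mine):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;// Hypothetical sketch: distinguish simple vs. grouped instructions.
type Step = { step: string; duration?: string; temperature?: string };
type Section = { section: string; steps: Step[] };

function isGrouped(instructions: Array&amp;lt;Step | Section&amp;gt;): instructions is Section[] {
  // Per the implementation note: any item with a section field means grouped.
  return instructions.some((item) =&amp;gt; &amp;quot;section&amp;quot; in item);
}
&lt;/code&gt;&lt;/pre&gt;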
&lt;h2&gt;Complete Example&lt;/h2&gt;
&lt;p&gt;Here&apos;s a comprehensive example demonstrating all YumML features:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yumml&quot;&gt;name: Classic Apple Pie
date: 2024-03-15
author: Jane Baker
description: A traditional apple pie with a flaky butter crust and cinnamon-spiced filling.
prepTime: 45 minutes
cookTime: 55 minutes
totalTime: 1 hour 40 minutes
servings: 8
yield: 1 pie (9-inch)
rating: 5
tags:
  - dessert
  - pie
  - baking

ingredients:
  - quantity: 2.5
    unit: cups
    item: all-purpose flour

  - quantity: 1
    unit: tsp
    item: salt

  - quantity: 1
    unit: cup
    item: unsalted butter
    notes: cold, cubed

  - quantity: 6
    unit: tbsp
    item: ice water

  - quantity: 6
    item: apples
    notes: peeled and sliced (Granny Smith recommended)

  - quantity: 0.75
    unit: cup
    item: sugar

  - quantity: 2
    unit: tsp
    item: cinnamon

  - quantity: 0.25
    unit: cup
    item: caramel sauce
    optional: true

instructions:
  - section: For the crust
    steps:
      - step: Combine flour and salt in a large bowl
      - step: Cut in cold butter until mixture resembles coarse crumbs
      - step: Add ice water gradually, mixing until dough forms
      - step: Divide dough in half, wrap in plastic, and refrigerate
        duration: 30 minutes

  - section: For the filling
    steps:
      - step: Preheat oven
        temperature: 375F
      - step: Toss sliced apples with sugar and cinnamon
      - step: Roll out bottom crust and place in pie dish

  - section: Assembly
    steps:
      - step: Add apple filling to crust
      - step: Roll out top crust and place over filling
      - step: Crimp edges and cut vents in top
      - step: Bake until golden brown and bubbling
        duration: 55 minutes
        temperature: 375F
      - step: Cool before serving
        duration: 30 minutes
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;What&apos;s Next&lt;/h2&gt;
&lt;p&gt;This spec is a starting point—to become useful, YumML needs implementations. If
you&apos;re interested in contributing:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Write a parser&lt;/strong&gt; in your favorite language (TypeScript, Python, Go, Rust...)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Build a converter&lt;/strong&gt; from &lt;a href=&quot;http://schema.org/Recipe&quot;&gt;schema.org/Recipe&lt;/a&gt; to
YumML&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Create a VS Code extension&lt;/strong&gt; with syntax highlighting and validation&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Experiment with LLMs&lt;/strong&gt; to see how well they can generate and parse YumML&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you build something, I&apos;d love to hear about it.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;em&gt;Interested in more open-source projects? Check out
&lt;a href=&quot;https://magarcia.io/cross-platform-secret-storage-with-cross-keychain/&quot;&gt;cross-keychain&lt;/a&gt;, a
cross-platform secret storage library, or see how I used AI to design the
&lt;a href=&quot;https://magarcia.io/air-gapped-webrtc-breaking-the-qr-limit/&quot;&gt;QWBP protocol&lt;/a&gt; for serverless
WebRTC.&lt;/em&gt;&lt;/p&gt;
</content:encoded>
        </item>
        <item>
            <title><![CDATA[Writing Powerful Claude Code Skills with npx bun]]></title>
            <description><![CDATA[Learn how to write Claude Code skills with third-party npm packages and no build step. Using npx bun provides auto-installing dependencies, native TypeScript support, and clean version pinning in imports—the JavaScript equivalent of Python's uv run for self-contained scripts.]]></description>
            <link>https://magarcia.io/writing-powerful-claude-code-skills-with-npx-bun/</link>
            <guid isPermaLink="false">https://magarcia.io/writing-powerful-claude-code-skills-with-npx-bun/</guid>
            <category><![CDATA[typescript]]></category>
            <category><![CDATA[node-js]]></category>
            <category><![CDATA[bun]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[developer-tools]]></category>
            <category><![CDATA[cli]]></category>
            <dc:creator><![CDATA[contact@magarcia.io]]></dc:creator>
            <pubDate>Wed, 07 Jan 2026 00:00:00 GMT</pubDate>
            <content:encoded>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Update (January 15, 2026):&lt;/strong&gt; After more testing, I&apos;ve found Bun&apos;s
&lt;code&gt;node_modules&lt;/code&gt; detection can break auto-install in unexpected situations. I
now recommend Deno for this use case. The approach below still works for
standalone scripts, but see
&lt;a href=&quot;https://magarcia.io/why-i-switched-from-bun-to-deno-for-claude-code-skills/&quot;&gt;Why I Switched from Bun to Deno for Claude Code Skills&lt;/a&gt;
for the full breakdown.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;strong&gt;Claude Code skills&lt;/strong&gt; let you extend Anthropic&apos;s agentic coding CLI with custom
instructions and executable scripts. But what if your script needs third-party
npm packages like &lt;strong&gt;lodash&lt;/strong&gt;, &lt;strong&gt;zod&lt;/strong&gt;, or &lt;strong&gt;csv-parse&lt;/strong&gt;? Without a build step or
&lt;code&gt;node_modules&lt;/code&gt; folder, imports fail. This guide shows how to use &lt;code&gt;npx bun&lt;/code&gt; to
write self-contained TypeScript skills that auto-install dependencies at
runtime—no &lt;code&gt;package.json&lt;/code&gt; required.&lt;/p&gt;
&lt;p&gt;The problem:
&lt;a href=&quot;https://docs.anthropic.com/en/docs/claude-code/skills&quot;&gt;Claude Code skills&lt;/a&gt; have
no build step. You cannot just &lt;code&gt;import lodash from &apos;lodash&apos;&lt;/code&gt; and expect it to
work. The script runs in a fresh environment with no &lt;code&gt;node_modules&lt;/code&gt; folder.&lt;/p&gt;
&lt;h2&gt;What I Wanted: uv run for JavaScript&lt;/h2&gt;
&lt;p&gt;Python solves this elegantly with &lt;a href=&quot;https://docs.astral.sh/uv/&quot;&gt;uv&lt;/a&gt;. You declare
dependencies inline using &lt;a href=&quot;https://peps.python.org/pep-0723/&quot;&gt;PEP 723&lt;/a&gt; metadata:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;#!/usr/bin/env -S uv run
# /// script
# dependencies = [&amp;quot;requests&amp;quot;, &amp;quot;rich&amp;quot;]
# requires-python = &amp;quot;&amp;gt;=3.10&amp;quot;
# ///

import requests
from rich import print

print(requests.get(&amp;quot;https://example.com&amp;quot;))
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Run it with &lt;code&gt;uv run script.py&lt;/code&gt; or just &lt;code&gt;./script.py&lt;/code&gt;. Dependencies install
automatically into an isolated environment. No &lt;code&gt;requirements.txt&lt;/code&gt;, no virtual
environment management, no build step. The script is self-contained.&lt;/p&gt;
&lt;p&gt;Python was my first professional programming language, and I still admire how
its ecosystem has evolved. But my team at &lt;a href=&quot;https://buffer.com&quot;&gt;&lt;strong&gt;Buffer&lt;/strong&gt;&lt;/a&gt; works
in TypeScript. I needed something that would be easy for my teammates to pick
up—familiar npm packages, familiar syntax—but as flexible and powerful as
&lt;code&gt;uv run&lt;/code&gt;.&lt;/p&gt;
&lt;h2&gt;The Constraint&lt;/h2&gt;
&lt;p&gt;A typical Claude Code skill looks like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;my-skill/
├── SKILL.md
└── scripts/
    └── process.js
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Claude reads &lt;code&gt;SKILL.md&lt;/code&gt; for instructions and executes the scripts when relevant.
But if &lt;code&gt;process.js&lt;/code&gt; imports any npm package, it fails. No &lt;code&gt;package.json&lt;/code&gt;, no
&lt;code&gt;node_modules&lt;/code&gt;, no dependencies.&lt;/p&gt;
&lt;p&gt;The obvious solutions—committing &lt;code&gt;node_modules&lt;/code&gt; or running &lt;code&gt;npm install&lt;/code&gt; at
runtime—are ugly. The first bloats your skill folder. The second adds latency
every time.&lt;/p&gt;
&lt;p&gt;I explored several approaches.&lt;/p&gt;
&lt;h2&gt;Approach 1: Pre-bundling with esbuild&lt;/h2&gt;
&lt;p&gt;Bundle your script into a single file with all dependencies included:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;esbuild script.ts --bundle --platform=node --outfile=script.mjs
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The output is self-contained. No runtime dependencies. Ship it with your skill
and Claude runs it directly.&lt;/p&gt;
&lt;p&gt;This works, but requires a build step. You must rebuild after every change.
Debugging bundled code is harder. It&apos;s the opposite of what I wanted.&lt;/p&gt;
&lt;h2&gt;Approach 2: Dynamic imports with esm.sh&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://esm.sh&quot;&gt;esm.sh&lt;/a&gt; serves npm packages as ES modules over HTTPS:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;import _ from &amp;quot;https://esm.sh/lodash@4.17.21&amp;quot;;
import { z } from &amp;quot;https://esm.sh/zod@3.23.0&amp;quot;;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;No installation needed. The runtime fetches the module on first use. Version
pinning lives in the URL.&lt;/p&gt;
&lt;p&gt;The problem: Node.js doesn&apos;t natively support HTTPS imports without experimental
flags or custom loaders. Some packages don&apos;t work well as pure ESM. Network
latency on every cold start adds up.&lt;/p&gt;
&lt;h2&gt;Approach 3: Google &lt;code&gt;zx&lt;/code&gt; with &lt;code&gt;--install&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://google.github.io/zx/&quot;&gt;zx&lt;/a&gt; is Google&apos;s tool for writing shell scripts in
JavaScript. It wraps &lt;code&gt;child_process&lt;/code&gt; and adds conveniences like the &lt;code&gt;$&lt;/code&gt; template
literal for running commands.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;--install&lt;/code&gt; flag auto-installs missing dependencies:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;#!/usr/bin/env zx
import _ from &amp;quot;lodash&amp;quot;; // @^4.17
import { parse } from &amp;quot;yaml&amp;quot;; // @^2.0

await $`echo &amp;quot;Dependencies auto-installed&amp;quot;`;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Run it with &lt;code&gt;npx zx --install script.mjs&lt;/code&gt;. On first run, &lt;strong&gt;zx&lt;/strong&gt; detects the
imports, installs the packages, and caches them.&lt;/p&gt;
&lt;p&gt;This gets closer to what I wanted. But version pinning through comments feels
hacky. And there&apos;s no native TypeScript support—you&apos;d need
&lt;a href=&quot;https://github.com/privatenumber/tsx&quot;&gt;tsx&lt;/a&gt; or similar.&lt;/p&gt;
&lt;h2&gt;Approach 4: Bun&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://bun.sh&quot;&gt;Bun&lt;/a&gt; takes a different approach. Auto-install is built into the
runtime. Write normal imports and Bun handles the rest:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;#!/usr/bin/env bun
import _ from &amp;quot;lodash&amp;quot;;
import { z } from &amp;quot;zod@^3.20&amp;quot;;
import chalk from &amp;quot;chalk@^5.0.0&amp;quot;;

console.log(chalk.green(&amp;quot;Dependencies just work&amp;quot;));
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Version pinning happens directly in the import path—cleaner than &lt;strong&gt;zx&lt;/strong&gt;&apos;s
comment syntax. TypeScript runs natively. Startup is fast.&lt;/p&gt;
&lt;p&gt;The catch: &lt;strong&gt;Bun&lt;/strong&gt; might not be installed in every environment. Claude Code
environments have Node.js and npm, but not necessarily Bun.&lt;/p&gt;
&lt;h2&gt;The Discovery: npx bun&lt;/h2&gt;
&lt;p&gt;Then I realized: I don&apos;t need Bun installed globally. I just need npm.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;npx -y bun script.ts
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;-y&lt;/code&gt; flag skips the confirmation prompt, which matters for non-interactive
execution. This works because &lt;code&gt;bun&lt;/code&gt; is published as an npm package. When you run
&lt;code&gt;npx bun&lt;/code&gt;, npm downloads the Bun binary and executes your script. You get Bun&apos;s
auto-install, TypeScript support, and speed—all through the npm/Node.js
toolchain that&apos;s already everywhere.&lt;/p&gt;
&lt;p&gt;I tested this in a fresh environment:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;import chalk from &amp;quot;chalk@^5.0.0&amp;quot;;
import { z } from &amp;quot;zod@3.23.0&amp;quot;;
import _ from &amp;quot;lodash@^4.17.0&amp;quot;;

console.log(chalk.green(&amp;quot;✓ chalk loaded&amp;quot;));

const schema = z.object({ name: z.string() });
console.log(chalk.blue(`✓ zod loaded - validation works`));

const grouped = _.groupBy([&amp;quot;one&amp;quot;, &amp;quot;two&amp;quot;, &amp;quot;three&amp;quot;], &amp;quot;length&amp;quot;);
console.log(chalk.yellow(`✓ lodash loaded`));
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Output:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;✓ chalk loaded
✓ zod loaded - validation works
✓ lodash loaded
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;No &lt;code&gt;package.json&lt;/code&gt;. No &lt;code&gt;node_modules&lt;/code&gt;. No build step. The first run installs
dependencies to Bun&apos;s global cache. Subsequent runs are instant.&lt;/p&gt;
&lt;p&gt;This is the JavaScript equivalent of &lt;code&gt;uv run&lt;/code&gt;. Same developer experience, same
self-contained scripts, familiar npm ecosystem.&lt;/p&gt;
&lt;h3&gt;Making Scripts Directly Executable&lt;/h3&gt;
&lt;p&gt;It gets better. Just like Python&apos;s &lt;code&gt;#!/usr/bin/env -S uv run&lt;/code&gt;, you can use a
shebang to make scripts directly executable:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;#!/usr/bin/env -S npx -y bun

import chalk from &amp;quot;chalk@^5.0.0&amp;quot;;

console.log(chalk.green(&amp;quot;Hello!&amp;quot;));
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;-S&lt;/code&gt; flag tells &lt;code&gt;env&lt;/code&gt; to split the string into separate arguments. Make the
script executable and run it directly:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;chmod +x script.ts
./script.ts
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now you have self-contained TypeScript scripts—no explicit invocation needed.&lt;/p&gt;
&lt;h2&gt;Using This in Claude Code Skills&lt;/h2&gt;
&lt;p&gt;Structure your skill like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;my-skill/
├── SKILL.md
└── scripts/
    └── process.ts
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In &lt;code&gt;SKILL.md&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;---
name: data-processor
description: Process and transform data files using advanced libraries
allowed-tools: [Bash, Read, Write]
---

# Data Processor

Run the processing script:

```bash
./scripts/process.ts &amp;lt;input-file&amp;gt;
```
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In &lt;code&gt;scripts/process.ts&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;#!/usr/bin/env -S npx -y bun

import { parse } from &amp;quot;csv-parse/sync@^5.0&amp;quot;;
import * as XLSX from &amp;quot;xlsx@^0.20&amp;quot;;

const [, , inputPath] = Bun.argv;
const file = Bun.file(inputPath);
const content = await file.text();

const rows = parse(content, { columns: true });
console.log(JSON.stringify(rows, null, 2));
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Claude runs the skill, the script executes with full access to npm packages, and
you never touch a &lt;code&gt;package.json&lt;/code&gt;.&lt;/p&gt;
&lt;h2&gt;Comparison&lt;/h2&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Approach&lt;/th&gt;
&lt;th&gt;Build Step&lt;/th&gt;
&lt;th&gt;TypeScript&lt;/th&gt;
&lt;th&gt;Version Pinning&lt;/th&gt;
&lt;th&gt;First-run Speed&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://esbuild.github.io/&quot;&gt;esbuild&lt;/a&gt; bundle&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Via build&lt;/td&gt;
&lt;td&gt;In source&lt;/td&gt;
&lt;td&gt;Fast&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://esm.sh&quot;&gt;esm.sh&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;In URL&lt;/td&gt;
&lt;td&gt;Network-bound&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://google.github.io/zx/&quot;&gt;npx zx --install&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Via tsx&lt;/td&gt;
&lt;td&gt;Comments&lt;/td&gt;
&lt;td&gt;Moderate&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;npx -y bun&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Native&lt;/td&gt;
&lt;td&gt;In import path&lt;/td&gt;
&lt;td&gt;Fast after cache&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;Caveats&lt;/h2&gt;
&lt;p&gt;This approach isn&apos;t perfect. A few things to consider:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Auto-install requires no node_modules directory.&lt;/strong&gt; Bun&apos;s auto-install feature
only works when no &lt;code&gt;node_modules&lt;/code&gt; directory is found in the working directory or
any parent directory.[^1] When a &lt;code&gt;node_modules&lt;/code&gt; folder exists—common in
monorepos or existing projects—Bun switches to regular Node.js module resolution
instead of its auto-install algorithm. Even the &lt;code&gt;--install=force&lt;/code&gt; flag doesn&apos;t
fully solve this: version specifiers in imports (like
&lt;code&gt;import { z } from &amp;quot;zod@3.0.0&amp;quot;&lt;/code&gt;) will throw a &lt;code&gt;VersionSpecifierNotAllowedHere&lt;/code&gt;
error when &lt;code&gt;node_modules&lt;/code&gt; is present. This means the approach works best for
standalone scripts outside of existing projects. For Claude Code skills stored
in &lt;code&gt;~/.claude/skills/&lt;/code&gt;, this typically isn&apos;t an issue. But if you&apos;re writing
scripts inside a project directory with &lt;code&gt;node_modules&lt;/code&gt;, you&apos;ll need to either
use a traditional &lt;code&gt;package.json&lt;/code&gt; or move the script outside the project tree.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Bun is not fully Node.js compatible.&lt;/strong&gt; Most npm packages work fine, but some
Node.js APIs behave differently or aren&apos;t implemented yet. If your script
depends on edge-case Node.js behavior—certain &lt;code&gt;fs&lt;/code&gt; operations, specific
&lt;code&gt;child_process&lt;/code&gt; options, native addons—you might hit unexpected issues. Check
&lt;a href=&quot;https://bun.sh/docs/runtime/nodejs-apis&quot;&gt;Bun&apos;s Node.js compatibility documentation&lt;/a&gt;
before committing to this approach.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;First-run latency still exists.&lt;/strong&gt; The first execution downloads Bun via npx
(~100MB depending on architecture) and installs dependencies. On a slow
connection or in a cold-start environment, this adds noticeable time. Subsequent
runs are fast, but that initial hit matters if your skill runs in ephemeral
environments that don&apos;t preserve Bun&apos;s cache.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Version pinning in imports is non-standard.&lt;/strong&gt; The &lt;code&gt;import x from &amp;quot;pkg@^1.0&amp;quot;&lt;/code&gt;
syntax is Bun-specific. Your IDE won&apos;t understand it for autocompletion or type
checking. For quick scripts, you can add &lt;code&gt;// @ts-ignore&lt;/code&gt; above the problematic
imports. For more serious development, maintain a &lt;code&gt;package.json&lt;/code&gt; with proper
versions and only use the inline syntax in the deployed skill.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;When to use zx instead.&lt;/strong&gt; If you need guaranteed Node.js compatibility—because
you&apos;re using a package that relies on Node-specific internals, or your team has
strict runtime
requirements—&lt;a href=&quot;https://google.github.io/zx/cli#install&quot;&gt;zx with --install&lt;/a&gt; is the
safer choice. It runs on Node.js directly, so compatibility is never a question.
The trade-off is no native TypeScript and the comment-based version pinning.&lt;/p&gt;
&lt;p&gt;For most skills that use common packages like &lt;strong&gt;lodash&lt;/strong&gt;, &lt;strong&gt;zod&lt;/strong&gt;, or
&lt;strong&gt;csv-parse&lt;/strong&gt;, Bun works fine. But know the escape hatch exists.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;npx -y bun&lt;/code&gt; combines the best properties: no build step, native TypeScript,
clean version pinning, and availability anywhere npm runs. For Claude Code
skills that need third-party packages, it&apos;s the simplest path to powerful
scripts—as long as you stay within Bun&apos;s compatibility boundaries.&lt;/p&gt;
&lt;p&gt;If you&apos;ve used Python&apos;s &lt;strong&gt;uv&lt;/strong&gt; and wished JavaScript had something similar, this
is it. Same philosophy, same workflow, familiar tools. And when you hit Bun&apos;s
edges, &lt;strong&gt;zx&lt;/strong&gt; is there as a fallback.&lt;/p&gt;
&lt;h2&gt;References&lt;/h2&gt;
&lt;p&gt;[^1]:
&lt;a href=&quot;https://bun.sh/docs/runtime/auto-install&quot;&gt;Bun Auto-Install Documentation&lt;/a&gt; —
&amp;quot;If no node_modules directory is found in the working directory or higher,
Bun will abandon Node.js-style module resolution in favor of the Bun module
resolution algorithm.&amp;quot;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.anthropic.com/en/docs/claude-code/skills&quot;&gt;Claude Code Skills Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://bun.sh&quot;&gt;Bun&lt;/a&gt; — The JavaScript runtime with built-in auto-install&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://bun.sh/docs/runtime/module-resolution&quot;&gt;Bun Module Resolution&lt;/a&gt; —
Understanding how Bun resolves modules&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://google.github.io/zx/&quot;&gt;Google zx&lt;/a&gt; — A tool for writing better scripts&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://esm.sh&quot;&gt;esm.sh&lt;/a&gt; — npm packages as ES modules over CDN&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.astral.sh/uv/&quot;&gt;uv&lt;/a&gt; — Python&apos;s package manager with inline script
dependencies (&lt;code&gt;uv run&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://peps.python.org/pep-0723/&quot;&gt;PEP 723&lt;/a&gt; — Inline script metadata
specification for Python&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://esbuild.github.io/&quot;&gt;esbuild&lt;/a&gt; — Fast JavaScript bundler&lt;/li&gt;
&lt;/ul&gt;
</content:encoded>
        </item>
        <item>
            <title><![CDATA[Building granola-cli: AI Meeting Notes in Your Terminal]]></title>
            <description><![CDATA[granola-cli brings your Granola.ai meeting notes to the command line. Query meetings with Claude Code, grep transcripts, export to JSON/TOON, and pipe action items into your workflows. Built with secure keychain storage and natural language date filtering.]]></description>
            <link>https://magarcia.io/reverse-engineered-meeting-notes-into-terminal/</link>
            <guid isPermaLink="false">https://magarcia.io/reverse-engineered-meeting-notes-into-terminal/</guid>
            <category><![CDATA[typescript]]></category>
            <category><![CDATA[node-js]]></category>
            <category><![CDATA[cli]]></category>
            <category><![CDATA[developer-tools]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[open-source]]></category>
            <dc:creator><![CDATA[contact@magarcia.io]]></dc:creator>
            <pubDate>Mon, 22 Dec 2025 00:00:00 GMT</pubDate>
            <content:encoded>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Disclaimer:&lt;/strong&gt; This project is an independent open-source tool and is not
affiliated with, endorsed by, or connected to Granola.ai.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;What if you could query your meeting history like a database? &lt;strong&gt;granola-cli&lt;/strong&gt;
brings your Granola.ai meeting notes to the command line—grep transcripts,
export to JSON, and pipe action items directly into Claude Code or your task
manager.&lt;/p&gt;
&lt;h2&gt;What is Granola?&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://www.granola.ai/&quot;&gt;Granola&lt;/a&gt; is an AI meeting assistant that sits in your
menu bar. It records system audio, transcribes locally, and uses LLMs to produce
structured summaries—key decisions, action items, discussion points. No bots
joining your calls, no awkward &amp;quot;Granola is recording&amp;quot; notifications to your
teammates. At &lt;a href=&quot;https://buffer.com&quot;&gt;&lt;strong&gt;Buffer&lt;/strong&gt;&lt;/a&gt;, it has become a daily driver for
our team.&lt;/p&gt;
&lt;p&gt;The notes it generates are excellent. But I hit a ceiling.&lt;/p&gt;
&lt;h2&gt;The Problem I Needed to Solve&lt;/h2&gt;
&lt;p&gt;I wanted raw data in my terminal. I wanted to feed last week&apos;s engineering syncs
into &lt;strong&gt;Claude Code&lt;/strong&gt; to plan my next sprint. I wanted to grep through
transcripts or pipe action items directly into my task manager.&lt;/p&gt;
&lt;p&gt;Copy-pasting from a web UI fell short. I found
&lt;a href=&quot;https://josephthacker.com/hacking/2025/05/08/reverse-engineering-granola-notes.html&quot;&gt;Joseph Thacker&apos;s research&lt;/a&gt;
on reverse engineering the Granola API, plus the
&lt;a href=&quot;https://github.com/getprobo/reverse-engineering-granola-api&quot;&gt;getprobo/reverse-engineering-granola-api&lt;/a&gt;
repo. The groundwork existed. A proper CLI for daily use did not.&lt;/p&gt;
&lt;p&gt;So I built one.&lt;/p&gt;
&lt;h2&gt;How I Mapped Granola&apos;s API&lt;/h2&gt;
&lt;p&gt;The Granola desktop app stores authentication tokens in a local JSON file. The
CLI reads these credentials, stores them securely in your system keychain via
&lt;strong&gt;cross-keychain&lt;/strong&gt; (which I
&lt;a href=&quot;https://magarcia.io/cross-platform-secret-storage-with-cross-keychain/&quot;&gt;wrote about previously&lt;/a&gt;),
and uses them to call Granola&apos;s internal APIs.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Key endpoints I mapped:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;POST /v2/get-documents&lt;/code&gt; — list meetings with cursor pagination&lt;/li&gt;
&lt;li&gt;&lt;code&gt;POST /v1/get-document-metadata&lt;/code&gt; — notes and participant data&lt;/li&gt;
&lt;li&gt;&lt;code&gt;POST /v1/get-document-transcript&lt;/code&gt; — transcript segments with speaker
detection&lt;/li&gt;
&lt;li&gt;&lt;code&gt;POST /v2/get-document-lists&lt;/code&gt; — folders and workspace organization&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Query, Filter, and Export Your Meetings&lt;/h2&gt;
&lt;h3&gt;List and Filter&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Recent meetings
granola meeting list --limit 10

# Filter by date (natural language supported)
granola meeting list --date yesterday
granola meeting list --since &amp;quot;last week&amp;quot;
granola meeting list --since 2025-12-01 --until 2025-12-15

# Filter by attendee or search
granola meeting list --attendee &amp;quot;john@example.com&amp;quot;
granola meeting list --search &amp;quot;quarterly planning&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;View Content&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Full meeting details with participants
granola meeting view &amp;lt;id&amp;gt;

# Your handwritten notes (converted from ProseMirror to Markdown)
granola meeting notes &amp;lt;id&amp;gt;

# AI-generated summary with key decisions and action items
granola meeting enhanced &amp;lt;id&amp;gt;

# Full transcript with speaker detection
granola meeting transcript &amp;lt;id&amp;gt;
granola meeting transcript &amp;lt;id&amp;gt; --timestamps
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Export for Pipelines&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Export everything about a meeting
granola meeting export &amp;lt;id&amp;gt; --format json
granola meeting export &amp;lt;id&amp;gt; --format toon

# Pipe to LLMs
granola meeting enhanced &amp;lt;id&amp;gt; --output toon | llm &amp;quot;What were the action items?&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;40% Fewer Tokens with TOON&lt;/h2&gt;
&lt;p&gt;The CLI supports &lt;a href=&quot;https://toonformat.dev/&quot;&gt;TOON&lt;/a&gt; (Token-Oriented Object
Notation), a format designed for LLM consumption. TOON delivers the same
structured data as JSON using &lt;strong&gt;40% fewer tokens&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;When you pipe meeting data to Claude or another model, every token saved leaves
more room for your question. That saving can be the difference between fitting
one meeting in your context window and fitting three.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ granola meeting export abc123 --format json | wc -c
  15234

$ granola meeting export abc123 --format toon | wc -c
  9140
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Same data. 40% smaller. That&apos;s the difference between asking a follow-up
question and hitting your context limit.&lt;/p&gt;
&lt;h2&gt;Turning Meeting History into AI Context&lt;/h2&gt;
&lt;p&gt;This is why I built it: to empower my AI agents.&lt;/p&gt;
&lt;p&gt;I use &lt;strong&gt;Claude Code&lt;/strong&gt; heavily. With &lt;strong&gt;granola-cli&lt;/strong&gt; installed, I can ask Claude
to analyze my meetings directly:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;You: Check my engineering syncs from last week and list any blockers mentioned.

Claude: I&apos;ll query your recent meetings using granola-cli.

[Runs: granola meeting list --since &amp;quot;last week&amp;quot; --search &amp;quot;sync&amp;quot;]

Found 3 engineering syncs. Analyzing transcripts...

Blockers mentioned:
1. CI pipeline flakiness blocking the release (Dec 18 sync)
2. Waiting on design review for the dashboard redesign (Dec 19 sync)
3. API rate limiting issues with the third-party integration (Dec 20 sync)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;No copy-pasting. No switching windows. Claude reads the data directly and gives
me answers.&lt;/p&gt;
&lt;p&gt;I&apos;ve also built &lt;a href=&quot;https://agentskills.io/home&quot;&gt;Agent Skills&lt;/a&gt; that check meeting
history, summarize decisions, and surface blockers from previous syncs. Your
meeting history becomes a queryable database for your AI workflow.&lt;/p&gt;
&lt;h2&gt;Under the Hood&lt;/h2&gt;
&lt;h3&gt;Secure Credential Storage&lt;/h3&gt;
&lt;p&gt;I refuse to store API tokens in plaintext config files. Too many CLI tools dump
secrets into &lt;code&gt;~/.config/toolname/credentials.json&lt;/code&gt;. One accidental &lt;code&gt;git add .&lt;/code&gt;
or misconfigured backup exposes your tokens.&lt;/p&gt;
&lt;p&gt;The CLI stores credentials via &lt;strong&gt;cross-keychain&lt;/strong&gt; in your OS&apos;s native credential
manager—macOS Keychain, Windows Credential Manager, or Linux Secret Service.
These systems encrypt secrets at rest, integrate with your login session, and
follow platform security best practices. Your Granola tokens never touch the
filesystem in readable form.&lt;/p&gt;
&lt;h3&gt;Token Refresh with File Locking&lt;/h3&gt;
&lt;p&gt;Granola uses single-use refresh tokens—each token works once and is then
invalidated. This improves security but creates a race condition: if two CLI
processes refresh simultaneously, one gets fresh tokens while the other presents
an already-consumed refresh token and fails.&lt;/p&gt;
&lt;p&gt;The CLI solves this with file-based locking. Before refreshing, the process
acquires an exclusive lock on a temp file. If another process is already
refreshing, the second waits (30-second timeout) rather than racing. The lock
releases immediately after refresh completes, so parallel CLI invocations work
smoothly—they take turns when needed.&lt;/p&gt;
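A minimal sketch of the locking idea, using an atomic `mkdir` as the lock. This is an illustration of the technique, not granola-cli's actual implementation; the lock path and function names are made up here.

```typescript
import { mkdirSync, rmdirSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// mkdir is atomic: exactly one process can create the directory,
// so it doubles as a cross-process mutex around the token refresh.
const LOCK_DIR = join(tmpdir(), "granola-refresh.lock");

function tryAcquireLock(): boolean {
  try {
    mkdirSync(LOCK_DIR);
    return true; // we own the lock
  } catch (err) {
    if ((err as NodeJS.ErrnoException).code === "EEXIST") {
      return false; // another process holds it; wait and retry
    }
    throw err;
  }
}

function releaseLock(): void {
  rmdirSync(LOCK_DIR);
}
```

A real implementation would poll `tryAcquireLock` until the 30-second timeout expires and would also clean up stale locks left behind by crashed processes.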
&lt;h3&gt;ProseMirror to Markdown Conversion&lt;/h3&gt;
&lt;p&gt;Granola stores notes in ProseMirror format—the same rich-text framework Notion,
The New York Times, and Atlassian use. It represents content as a JSON tree of
nodes with marks (formatting) attached.&lt;/p&gt;
&lt;p&gt;The CLI walks this tree, converting it to Markdown. Headings become &lt;code&gt;#&lt;/code&gt; lines,
lists become &lt;code&gt;-&lt;/code&gt; items, text marks become their Markdown equivalents: bold wraps
in &lt;code&gt;**&lt;/code&gt;, italic in &lt;code&gt;*&lt;/code&gt;, code in backticks. The conversion preserves nested
structures, so a bulleted list inside a blockquote renders correctly. The
result: readable Markdown you can pipe to other tools, search with grep, or feed
to an LLM.&lt;/p&gt;
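The traversal can be sketched in a few lines. This is a simplified illustration rather than the CLI's actual converter; the node and mark names (`doc`, `heading`, `strong`, `em`, `code`) follow common ProseMirror schema conventions and are assumptions here.

```typescript
// A ProseMirror document is a JSON tree: block nodes contain child
// nodes, leaf text nodes carry formatting as "marks".
type PMNode = {
  type: string;
  text?: string;
  content?: PMNode[];
  marks?: { type: string }[];
  attrs?: { level?: number };
};

function renderText(node: PMNode): string {
  let out = node.text ?? "";
  for (const mark of node.marks ?? []) {
    if (mark.type === "strong") out = `**${out}**`; // bold -> **
    if (mark.type === "em") out = `*${out}*`;       // italic -> *
    if (mark.type === "code") out = `\`${out}\``;   // code -> backticks
  }
  return out;
}

function toMarkdown(node: PMNode): string {
  switch (node.type) {
    case "text":
      return renderText(node);
    case "heading":
      return `${"#".repeat(node.attrs?.level ?? 1)} ${(node.content ?? [])
        .map(toMarkdown)
        .join("")}\n`;
    case "paragraph":
      return `${(node.content ?? []).map(toMarkdown).join("")}\n`;
    case "bulletList":
      return (node.content ?? [])
        .map((item) => `- ${toMarkdown(item).trim()}\n`)
        .join("");
    default:
      // doc, listItem, and anything unrecognized: just recurse.
      return (node.content ?? []).map(toMarkdown).join("");
  }
}
```

The real converter additionally handles blockquotes, ordered lists, and nesting, but the shape is the same: recurse over the tree, emit Markdown per node type.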
&lt;h3&gt;Natural Language Date Parsing&lt;/h3&gt;
&lt;p&gt;Nobody types ISO dates willingly. The CLI accepts &amp;quot;today&amp;quot;, &amp;quot;yesterday&amp;quot;, &amp;quot;3 days
ago&amp;quot;, &amp;quot;last week&amp;quot;, or partial dates like &amp;quot;Dec 1&amp;quot;. For ranges, combine &lt;code&gt;--since&lt;/code&gt;
and &lt;code&gt;--until&lt;/code&gt; with any format. The parser handles the rest.&lt;/p&gt;
&lt;p&gt;The parser normalizes input, handles edge cases (what does &amp;quot;last week&amp;quot; mean on a
Monday?), and returns UTC timestamps matching Granola&apos;s API expectations. The
common case—&amp;quot;show me yesterday&apos;s meetings&amp;quot;—becomes a single intuitive flag.&lt;/p&gt;
&lt;h2&gt;4 Hours with Claude Code Opus 4.5&lt;/h2&gt;
&lt;p&gt;I built this tool in about &lt;strong&gt;4 to 5 hours&lt;/strong&gt; pairing with &lt;strong&gt;Claude Code Opus
4.5&lt;/strong&gt;. I focused on architecture and intent while Claude handled implementation.
The result: a production-ready CLI with strict TypeScript, 95%+ test coverage
across 630+ test cases, and modular design—all in a single afternoon.&lt;/p&gt;
&lt;p&gt;This is &amp;quot;vibe engineering&amp;quot; in practice. I skipped the lengthy planning phase,
described what I wanted, reviewed the output, and iterated quickly.&lt;/p&gt;
&lt;h2&gt;Get Started&lt;/h2&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Install
npm install -g granola-cli

# Login (reads credentials from your Granola desktop app)
granola auth login

# List your meetings
granola meeting list
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The source is at
&lt;a href=&quot;https://github.com/magarcia/granola-cli&quot;&gt;github.com/magarcia/granola-cli&lt;/a&gt;.
Issues and PRs welcome.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Related posts:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://magarcia.io/cross-platform-secret-storage-with-cross-keychain/&quot;&gt;Cross-Platform Secret Storage in Node.js with cross-keychain&lt;/a&gt; -
The library granola-cli uses for secure credential storage&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://magarcia.io/asking-ai-to-build-the-tool-instead-of-doing-the-task/&quot;&gt;Asking AI to Build the Tool Instead of Doing the Task&lt;/a&gt; -
How I approach AI-assisted development&lt;/li&gt;
&lt;/ul&gt;
</content:encoded>
        </item>
        <item>
            <title><![CDATA[Cross-Platform Secret Storage in Node.js with cross-keychain]]></title>
            <description><![CDATA[Stop storing API keys in .env files. cross-keychain is a TypeScript library that uses your OS's native credential manager (macOS Keychain, Windows Credential Manager, Linux Secret Service) to store secrets securely. One API, zero plaintext files.]]></description>
            <link>https://magarcia.io/cross-platform-secret-storage-with-cross-keychain/</link>
            <guid isPermaLink="false">https://magarcia.io/cross-platform-secret-storage-with-cross-keychain/</guid>
            <category><![CDATA[typescript]]></category>
            <category><![CDATA[node-js]]></category>
            <category><![CDATA[security]]></category>
            <category><![CDATA[developer-tools]]></category>
            <category><![CDATA[open-source]]></category>
            <dc:creator><![CDATA[contact@magarcia.io]]></dc:creator>
            <pubDate>Wed, 01 Oct 2025 00:00:00 GMT</pubDate>
            <content:encoded>&lt;p&gt;Stop storing API keys in plaintext &lt;code&gt;.env&lt;/code&gt; files. &lt;strong&gt;cross-keychain&lt;/strong&gt; is a
TypeScript library that uses your OS&apos;s native credential manager to store
secrets securely—macOS Keychain, Windows Credential Manager, or Linux Secret
Service. One API, &lt;em&gt;zero plaintext config files&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;The library was born from &lt;strong&gt;mcp-tool-selector&lt;/strong&gt; (&lt;em&gt;still a work in progress&lt;/em&gt;), where I
needed to manage API keys for multiple MCP servers without scattering secrets
across &lt;code&gt;.env&lt;/code&gt; files — or worse, committing them to repos. It became a solid
cross-platform utility, so I published it.&lt;/p&gt;
&lt;h2&gt;At a glance&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Works on &lt;strong&gt;macOS, Windows, and Linux&lt;/strong&gt; with native backend support&lt;/li&gt;
&lt;li&gt;Provides both &lt;strong&gt;programmatic API&lt;/strong&gt; and &lt;strong&gt;CLI&lt;/strong&gt; for storing/retrieving secrets&lt;/li&gt;
&lt;li&gt;Automatic fallback when native modules aren&apos;t available&lt;/li&gt;
&lt;li&gt;Zero deps on the public API, TS-first, Node 18+, ESM/CJS&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Docs &amp;amp; API:&lt;/strong&gt; read the
&lt;a href=&quot;https://github.com/magarcia/cross-keychain&quot;&gt;GitHub repo&lt;/a&gt; and the
&lt;a href=&quot;https://www.npmjs.com/package/cross-keychain&quot;&gt;npm package page&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Quick taste: store &amp;amp; retrieve secrets&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Programmatic usage:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-ts&quot;&gt;import { setPassword, getPassword, deletePassword } from &amp;quot;cross-keychain&amp;quot;;

// Store a secret
await setPassword(&amp;quot;myapp&amp;quot;, &amp;quot;api-token&amp;quot;, &amp;quot;sk-1234567890&amp;quot;);

// Retrieve it later
const token = await getPassword(&amp;quot;myapp&amp;quot;, &amp;quot;api-token&amp;quot;);
console.log(token); // &amp;quot;sk-1234567890&amp;quot;

// Delete when done
await deletePassword(&amp;quot;myapp&amp;quot;, &amp;quot;api-token&amp;quot;);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;CLI usage:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Store a secret
npx cross-keychain set myapp api-token

# Retrieve it
npx cross-keychain get myapp api-token

# Delete it
npx cross-keychain delete myapp api-token
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Why this matters&lt;/h2&gt;
&lt;p&gt;Storing secrets in plaintext &lt;code&gt;.env&lt;/code&gt; or config files is common but risky. You
must remember to &lt;code&gt;.gitignore&lt;/code&gt; them, rotate them when they leak, and manage them
across environments. &lt;em&gt;Native OS credential stores&lt;/em&gt; handle this—encrypted at
rest, access-controlled, and integrated with your system.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;cross-keychain&lt;/code&gt; provides a consistent API across platforms: write once, let the
OS handle the heavy lifting.&lt;/p&gt;
&lt;p&gt;This is my third AI-engineered project (after
&lt;a href=&quot;https://github.com/magarcia/mcp-server-giphy&quot;&gt;mcp-server-giphy&lt;/a&gt; and
&lt;a href=&quot;https://magarcia.io/stop-sprinkling-process-env-everywhere/&quot;&gt;env-interpolation&lt;/a&gt;), built with
multiple AI agents. If you&apos;re tired of managing plaintext secrets, this should
simplify things considerably.&lt;/p&gt;
&lt;h2&gt;When to Use cross-keychain&lt;/h2&gt;
&lt;p&gt;This library is ideal for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;CLI tools&lt;/strong&gt; that need to store API tokens between sessions&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Development environments&lt;/strong&gt; where you want secure credential storage&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Local applications&lt;/strong&gt; that authenticate with external services&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Any Node.js app&lt;/strong&gt; that currently uses &lt;code&gt;.env&lt;/code&gt; files for secrets&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It&apos;s not suitable for server-side applications in production—those should use
dedicated secret managers like HashiCorp Vault or cloud provider solutions.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;If you&apos;re building Node.js tools that handle credentials, stop relying on
plaintext files. &lt;strong&gt;cross-keychain&lt;/strong&gt; gives you secure, native storage with
minimal API surface. Your users&apos; secrets deserve better than &lt;code&gt;credentials.json&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Related posts:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://magarcia.io/stop-sprinkling-process-env-everywhere/&quot;&gt;Enable environment variables in your configs with env-interpolation&lt;/a&gt; -
Variable interpolation for config files&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://magarcia.io/reverse-engineered-meeting-notes-into-terminal/&quot;&gt;I Reverse Engineered My Meeting Notes into the Terminal&lt;/a&gt; -
A CLI that uses cross-keychain for secure credential storage&lt;/li&gt;
&lt;/ul&gt;
</content:encoded>
        </item>
        <item>
            <title><![CDATA[Stop Sprinkling process.env Everywhere: Use env-interpolation]]></title>
            <description><![CDATA[Tired of process.env scattered across your codebase? env-interpolation is a TypeScript library that resolves ${VAR} placeholders in config objects with support for default values and nested resolution. Clean configs, zero dependencies.]]></description>
            <link>https://magarcia.io/stop-sprinkling-process-env-everywhere/</link>
            <guid isPermaLink="false">https://magarcia.io/stop-sprinkling-process-env-everywhere/</guid>
            <category><![CDATA[typescript]]></category>
            <category><![CDATA[node-js]]></category>
            <category><![CDATA[developer-tools]]></category>
            <category><![CDATA[open-source]]></category>
            <dc:creator><![CDATA[contact@magarcia.io]]></dc:creator>
            <pubDate>Mon, 29 Sep 2025 00:00:00 GMT</pubDate>
            <content:encoded>&lt;p&gt;Tired of &lt;code&gt;process.env&lt;/code&gt; scattered across your codebase? &lt;strong&gt;env-interpolation&lt;/strong&gt; is
a TypeScript library that resolves &lt;code&gt;${VAR}&lt;/code&gt; and &lt;code&gt;${VAR:default}&lt;/code&gt; placeholders
inside config objects. It walks strings in objects/arrays and &lt;strong&gt;never touches
keys&lt;/strong&gt;, so shapes stay stable and predictable—perfect for layered configuration.&lt;/p&gt;
&lt;p&gt;I built it for &lt;strong&gt;mcp-tool-selector&lt;/strong&gt;, where I needed layered config without
leaking secrets or scattering &lt;code&gt;process.env&lt;/code&gt; calls. It became a sharp utility, so
I published it.&lt;/p&gt;
&lt;h2&gt;At a glance&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Resolves placeholders in &lt;strong&gt;values only&lt;/strong&gt; (objects/arrays), keys untouched&lt;/li&gt;
&lt;li&gt;Supports defaults and multi-pass resolution&lt;/li&gt;
&lt;li&gt;Zero deps, TS-first, Node 18+, ESM/CJS&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Docs &amp;amp; API:&lt;/strong&gt; read the
&lt;a href=&quot;https://github.com/magarcia/env-interpolation&quot;&gt;GitHub repo&lt;/a&gt; and the
&lt;a href=&quot;https://www.npmjs.com/package/env-interpolation&quot;&gt;npm package page&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Quick taste: load JSON → interpolate&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;&lt;code&gt;config.json&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &amp;quot;api&amp;quot;: &amp;quot;${API_URL:https://api.example.com}&amp;quot;,
  &amp;quot;timeoutMs&amp;quot;: &amp;quot;${TIMEOUT:5000}&amp;quot;,
  &amp;quot;flags&amp;quot;: [&amp;quot;${PRIMARY:alpha}&amp;quot;, &amp;quot;${SECONDARY:beta}&amp;quot;],
  &amp;quot;service&amp;quot;: {
    &amp;quot;url&amp;quot;: &amp;quot;${SERVICE_URL:${API_URL}/v1}&amp;quot;,
    &amp;quot;headers&amp;quot;: { &amp;quot;x-tenant&amp;quot;: &amp;quot;${TENANT:public}&amp;quot; }
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;&lt;code&gt;load-config.ts&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-ts&quot;&gt;import { readFileSync } from &amp;quot;node:fs&amp;quot;;
import { interpolate } from &amp;quot;env-interpolation&amp;quot;;

const raw = readFileSync(&amp;quot;config.json&amp;quot;, &amp;quot;utf8&amp;quot;);
const input = JSON.parse(raw);

const resolved = interpolate(input, {
  API_URL: &amp;quot;https://api.example.com&amp;quot;,
  TIMEOUT: &amp;quot;8000&amp;quot;,
  TENANT: &amp;quot;public&amp;quot;,
});

console.log(resolved);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This is my second AI-engineered project (the first was
&lt;a href=&quot;https://github.com/magarcia/mcp-server-giphy&quot;&gt;mcp-server-giphy&lt;/a&gt;), built with
multiple AI agents (Claude, Copilot, Gemini &amp;amp; Codex). If your configs span
files, environments, and tools, this should smooth a few rough edges.&lt;/p&gt;
&lt;h2&gt;Why Not Just Use process.env Directly?&lt;/h2&gt;
&lt;p&gt;Reading environment variables inline scatters configuration logic throughout
your codebase:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-ts&quot;&gt;// Scattered approach - hard to audit and test
const apiUrl = process.env.API_URL || &amp;quot;https://api.example.com&amp;quot;;
const timeout = parseInt(process.env.TIMEOUT || &amp;quot;5000&amp;quot;, 10);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With &lt;strong&gt;env-interpolation&lt;/strong&gt;, configuration stays centralized. Define the
structure once, and the library handles resolution:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-ts&quot;&gt;// Centralized approach - config.json is the source of truth
const config = interpolate(loadConfig(&amp;quot;config.json&amp;quot;), process.env);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This makes configs easier to audit, test, and share across environments.&lt;/p&gt;
&lt;h2&gt;When to Use env-interpolation&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Multi-environment configs&lt;/strong&gt;: Development, staging, production with the same
structure&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Layered configuration&lt;/strong&gt;: Base config with environment-specific overrides&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Tool configuration&lt;/strong&gt;: CLI tools that need portable config files&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Monorepos&lt;/strong&gt;: Shared config templates across packages&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Related:&lt;/strong&gt; For secure secret storage without plaintext files, check out
&lt;a href=&quot;https://magarcia.io/cross-platform-secret-storage-with-cross-keychain/&quot;&gt;cross-keychain&lt;/a&gt;, which
stores credentials in your OS&apos;s native credential manager.&lt;/p&gt;
</content:encoded>
        </item>
        <item>
            <title><![CDATA[Asking AI to Build the Tool Instead of Doing the Task]]></title>
            <description><![CDATA[Stop asking AI to perform repetitive code changes directly. Instead, have it build a codemod. This technique reduced our migration errors by 95% and cut a 2-day task to 2 hours.]]></description>
            <link>https://magarcia.io/asking-ai-to-build-the-tool-instead-of-doing-the-task/</link>
            <guid isPermaLink="false">https://magarcia.io/asking-ai-to-build-the-tool-instead-of-doing-the-task/</guid>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[javascript]]></category>
            <category><![CDATA[typescript]]></category>
            <category><![CDATA[developer-tools]]></category>
            <dc:creator><![CDATA[contact@magarcia.io]]></dc:creator>
            <pubDate>Fri, 06 Jun 2025 00:00:00 GMT</pubDate>
            <content:encoded>&lt;p&gt;What if the best way to use AI for code migrations isn&apos;t asking it to do the
work, but asking it to build the tool that does? This counterintuitive approach
transformed how our team handles large-scale refactoring, lifting our success
rate from 60% to 95% and completing migrations in hours instead of days.&lt;/p&gt;
&lt;h2&gt;The Problem&lt;/h2&gt;
&lt;p&gt;Any reasonably sized codebase demands large-scale changes: migrating libraries,
updating deprecated APIs, or refactoring components. The traditional AI approach
looks like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Hey AI, please update all tooltip components from @old-design-system to @new-design-system
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And then the problems begin:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The AI struggles to maintain context across hundreds of files&lt;/li&gt;
&lt;li&gt;Token consumption explodes as it processes each file&lt;/li&gt;
&lt;li&gt;Error rates increase with scale&lt;/li&gt;
&lt;li&gt;You spend more time fixing the AI&apos;s mistakes than doing the migration yourself&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;The Better Approach&lt;/h2&gt;
&lt;p&gt;Instead of asking AI to perform the migration directly, we ask it to build a
tool that performs the migration. Here&apos;s how it works:&lt;/p&gt;
&lt;h3&gt;Step 1: Manual Migration&lt;/h3&gt;
&lt;p&gt;First, pick a representative example and migrate it manually. This serves two
purposes:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;You understand the exact transformation needed&lt;/li&gt;
&lt;li&gt;You have a concrete example to show the AI&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;// Before: Using old tooltip
import { Tooltip } from &apos;@old-design-system&apos;;

&amp;lt;Tooltip content=&amp;quot;Hello&amp;quot; position=&amp;quot;top&amp;quot;&amp;gt;
  &amp;lt;Button&amp;gt;Hover me&amp;lt;/Button&amp;gt;
&amp;lt;/Tooltip&amp;gt;

// After: Using new tooltip
import { Tooltip } from &apos;@new-design-system&apos;;

&amp;lt;Tooltip title=&amp;quot;Hello&amp;quot; placement=&amp;quot;top&amp;quot;&amp;gt;
  &amp;lt;Button&amp;gt;Hover me&amp;lt;/Button&amp;gt;
&amp;lt;/Tooltip&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Step 2: Extract the Pattern&lt;/h3&gt;
&lt;p&gt;Get the diff of your changes and document both the old and new component
signatures:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-diff&quot;&gt;- import { Tooltip } from &apos;@old-design-system&apos;;
+ import { Tooltip } from &apos;@new-design-system&apos;;

- &amp;lt;Tooltip content={text} position={position}&amp;gt;
+ &amp;lt;Tooltip title={text} placement={position}&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Step 3: Build the Automation&lt;/h3&gt;
&lt;p&gt;Now, instead of asking the AI to perform hundreds of similar changes, we ask it
to build a codemod:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Based on this migration example, build a codemod that:
1. Updates the import statement
2. Renames the &apos;content&apos; prop to &apos;title&apos;
3. Renames the &apos;position&apos; prop to &apos;placement&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The AI will generate a proper codemod using tools like jscodeshift that can be
run across your entire codebase.&lt;/p&gt;
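&lt;p&gt;For illustration, the generated codemod might look roughly like this. This is a
hypothetical sketch using jscodeshift&apos;s transform API; the package and prop names
come from the tooltip example above:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;// transform.js - run with: npx jscodeshift -t transform.js src/
module.exports = function transformer(file, api) {
  const j = api.jscodeshift;
  const root = j(file.source);

  // Step 1: update the import source
  root
    .find(j.ImportDeclaration, { source: { value: &apos;@old-design-system&apos; } })
    .forEach(function (path) {
      path.node.source.value = &apos;@new-design-system&apos;;
    });

  // Steps 2 and 3: rename props on Tooltip elements
  const renames = { content: &apos;title&apos;, position: &apos;placement&apos; };
  root
    .find(j.JSXOpeningElement, { name: { name: &apos;Tooltip&apos; } })
    .forEach(function (path) {
      path.node.attributes.forEach(function (attr) {
        if (attr.type !== &apos;JSXAttribute&apos;) return;
        const next = renames[attr.name.name];
        if (next) attr.name.name = next;
      });
    });

  return root.toSource();
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Because the transform is a single file, you can dry-run it on a handful of
components before unleashing it on the whole codebase.&lt;/p&gt;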
&lt;h2&gt;Real-World Results&lt;/h2&gt;
&lt;p&gt;We recently used this approach at &lt;a href=&quot;https://buffer.com&quot;&gt;&lt;strong&gt;Buffer&lt;/strong&gt;&lt;/a&gt; to migrate
tooltip components from our legacy design system to a new one. The results were
impressive:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;95% success rate&lt;/strong&gt;: Most components migrated perfectly without manual
intervention&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;2 hours instead of 2 days&lt;/strong&gt;: The entire migration was completed in a
fraction of the expected time&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;5% edge cases&lt;/strong&gt;: The failures were weird corner cases and legacy tooltip
variants we didn&apos;t even know existed&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Compare this to our previous attempts where we asked AI to do the migration
directly:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;60% success rate&lt;/li&gt;
&lt;li&gt;Constant need for manual fixes&lt;/li&gt;
&lt;li&gt;Token limits hit frequently&lt;/li&gt;
&lt;li&gt;Inconsistent transformations across files&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Why This Works&lt;/h2&gt;
&lt;p&gt;AI excels at pattern recognition and code generation but struggles to maintain
context across large-scale operations. Asking it to build a tool plays to its
strengths:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Single focused task&lt;/strong&gt;: Building a codemod is one coherent task, not
hundreds of micro-tasks&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Pattern abstraction&lt;/strong&gt;: The AI can focus on understanding the transformation
pattern rather than applying it&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Testable output&lt;/strong&gt;: You can test the codemod on a few files before running
it everywhere&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Reusable&lt;/strong&gt;: The codemod can be shared with your team or used for similar
migrations&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;The Beauty of Throwaway Code&lt;/h2&gt;
&lt;p&gt;We never review the codemod code that the AI generates. Why? Its quality does
not matter — it runs once and gets deleted after the migration.&lt;/p&gt;
&lt;p&gt;This is the perfect scenario for &amp;quot;vibe coding&amp;quot; — letting AI generate code
without review. Only the outcome matters: Did the migration work? Are the
transformed files correct?&lt;/p&gt;
&lt;p&gt;Think about it:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The codemod runs once, then gets deleted&lt;/li&gt;
&lt;li&gt;You review the actual changes in your pull request anyway&lt;/li&gt;
&lt;li&gt;If it fails, you iterate&lt;/li&gt;
&lt;li&gt;Nobody maintains or builds upon this code&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This mindset shift liberates you. Skip perfecting the migration script; focus on
the migrated code.&lt;/p&gt;
&lt;h2&gt;An Interesting Observation&lt;/h2&gt;
&lt;p&gt;While testing Claude Code on a similar migration task, I noticed something
fascinating. The AI started by making changes file-by-file, but after processing
a few files, it stopped and began writing migration scripts instead.&lt;/p&gt;
&lt;p&gt;It created multiple bash scripts for different edge cases instead of a unified
codemod — not perfect, but it shows that AI tools now recognize these patterns.
The AI autonomously realized that building a tool beats doing the task manually.&lt;/p&gt;
&lt;h2&gt;When to Use This Approach&lt;/h2&gt;
&lt;p&gt;This technique works best for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Library migrations&lt;/li&gt;
&lt;li&gt;API updates&lt;/li&gt;
&lt;li&gt;Component refactoring&lt;/li&gt;
&lt;li&gt;Any repetitive transformation with a clear pattern&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It&apos;s less suitable for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;One-off changes&lt;/li&gt;
&lt;li&gt;Complex refactoring that requires human judgment&lt;/li&gt;
&lt;li&gt;Changes with no clear pattern&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;The meta-lesson: have AI build the fishing rod, not catch each fish. When facing
a large-scale code change, resist dumping the entire task on AI. Instead:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Do one example manually&lt;/li&gt;
&lt;li&gt;Have AI build the automation&lt;/li&gt;
&lt;li&gt;Review and run the tool&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;This approach transformed how our team handles migrations. Teammates I share it
with consistently marvel at the results. Use AI smarter, not less.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Related posts:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://magarcia.io/writing-powerful-claude-code-skills-with-npx-bun/&quot;&gt;Writing Powerful Claude Code Skills with npx + Bun&lt;/a&gt; -
Extend AI capabilities with custom skills&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://magarcia.io/why-i-switched-from-bun-to-deno-for-claude-code-skills/&quot;&gt;Why I Switched from Bun to Deno for Claude Code Skills&lt;/a&gt; -
Runtime considerations for AI tooling&lt;/li&gt;
&lt;/ul&gt;
</content:encoded>
        </item>
        <item>
            <title><![CDATA[A Comprehensive Guide to DMARC: Ensuring Email Integrity and Trust]]></title>
            <description><![CDATA[When building a digital project, ensuring your emails land safely and are trusted is paramount. But how do you navigate the maze of email security? Enter DMARC. Alongside its companions, SPF and DKIM, we delve deep into establishing email integrity and combating threats like email spoofing and phishing.]]></description>
            <link>https://magarcia.io/the-comprehensive-guide-to-dmarc-ensuring-email-integrity-and-trust/</link>
            <guid isPermaLink="false">https://magarcia.io/the-comprehensive-guide-to-dmarc-ensuring-email-integrity-and-trust/</guid>
            <category><![CDATA[security]]></category>
            <dc:creator><![CDATA[contact@magarcia.io]]></dc:creator>
            <pubDate>Mon, 11 Sep 2023 00:00:00 GMT</pubDate>
            <content:encoded>&lt;p&gt;While developing &lt;a href=&quot;https://voices.ink/&quot;&gt;Voices.ink&lt;/a&gt;, a collaborative project with
&lt;a href=&quot;https://github.com/estermv&quot;&gt;@Ester Martí&lt;/a&gt; that transcribes voice notes into
Notion, I faced a common problem. How could I ensure our transactional emails
reached inboxes rather than spam folders — or worse, prevent attackers from
weaponizing our domain for phishing? My research led me to DMARC.&lt;/p&gt;
&lt;p&gt;Email forms the backbone of professional communication, yet its security
vulnerabilities remain startling. Cyber attackers constantly deceive through
email spoofing and phishing. As fraud escalates, robust solutions become
imperative. DMARC — an advanced email protocol — restores trust in our inboxes.&lt;/p&gt;
&lt;h3&gt;&lt;strong&gt;Why Do We Need DMARC?&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Email&apos;s inherent vulnerabilities enable domain impersonation, where attackers
send emails pretending to be from trusted sources. SPF (Sender Policy Framework)
and DKIM (DomainKeys Identified Mail) counter these threats but remain
imperfect. DMARC (Domain-based Message Authentication, Reporting &amp;amp; Conformance)
fills this gap by leveraging both SPF and DKIM.&lt;/p&gt;
&lt;h3&gt;&lt;strong&gt;Understanding DMARC&apos;s Significance&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;DMARC serves three main purposes:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Authentication&lt;/strong&gt;: Ensuring that an email claiming to be from a specific
domain genuinely originates from that domain.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Reporting&lt;/strong&gt;: Enabling domain recipients to report back to the sender about
DMARC evaluation results, thereby offering insights into potential issues.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Policy Enforcement&lt;/strong&gt;: Granting domain owners the power to specify how
unauthenticated emails should be handled.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;https://media.emailonacid.com/wp-content/uploads/2022/09/How-DMARC-Policy-Works.png&quot; alt=&quot;How DMARC Works Diagram&quot;&gt;
&lt;em&gt;Credit: Email on Acid (How DMARC policy works)&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;&lt;strong&gt;The Building Blocks of DMARC: SPF &amp;amp; DKIM&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;SPF&lt;/strong&gt; verifies that the email&apos;s sending server has the domain owner&apos;s
authorization. It uses a specific TXT record in the DNS, like:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;v=spf1 ip4:192.0.2.0/24 ip4:198.51.100.123 a:mail.example.com -all&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;This record essentially says, &amp;quot;Only the IP range 192.0.2.0 to 192.0.2.255, the
address 198.51.100.123, and the server behind mail.example.com are authorized to
send emails for my domain.&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Explanation&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;v=spf1&lt;/code&gt;: This indicates the version of SPF being used, which is SPF
version 1.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;ip4:192.0.2.0/24&lt;/code&gt;: Authorizes the IP range 192.0.2.0 to 192.0.2.255 to send
emails for the domain.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;ip4:198.51.100.123&lt;/code&gt;: Authorizes the specific IP address 198.51.100.123.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;a:mail.example.com&lt;/code&gt;: Authorizes the IP address resolved from the domain name
mail.example.com.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;-all&lt;/code&gt;: Specifies that no other hosts are allowed to send emails. (The &apos;-&apos; is
a hard fail, meaning emails from other sources should be rejected. &lt;code&gt;~all&lt;/code&gt;
would be a soft fail, suggesting they should be accepted but marked.)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;DKIM&lt;/strong&gt;, on the other hand, ensures the email&apos;s integrity by using
cryptographic signatures. A typical DKIM TXT DNS record might look like:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;v=DKIM1; p=MIGfMA0GCSqG...&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;This record holds the public key that receiving servers use to verify the
email&apos;s DKIM signature and confirm its authenticity.&lt;/p&gt;
&lt;p&gt;The record&apos;s name typically includes a selector prefix, allowing the domain to
have multiple DKIM keys. When sending an email, the server will mention which
selector it&apos;s using, guiding the receiving server to the right DNS record.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Explanation&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;v=DKIM1&lt;/code&gt;: This signifies the version of DKIM being used.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;p=&lt;/code&gt;: This is the public key that receiving servers use to verify the DKIM
signature in the email header. The actual key would be a long string
(truncated in the example).&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;&lt;strong&gt;DMARC in Action&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;When DMARC is implemented, domain owners publish a DMARC policy in their TXT DNS
records (using the name &lt;code&gt;_dmarc&lt;/code&gt;), such as:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;v=DMARC1; p=reject; rua=mailto:reports@example.com; ruf=mailto:forensic@example.com; pct=100; aspf=r; adkim=r&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;This record translates to: &amp;quot;If an email fails DMARC authentication, reject it.
Send aggregate DMARC reports to
&lt;a href=&quot;mailto:reports@example.com&quot;&gt;reports@example.com&lt;/a&gt; and forensic reports to
&lt;a href=&quot;mailto:forensic@example.com&quot;&gt;forensic@example.com&lt;/a&gt;.&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Explanation&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;v=DMARC1&lt;/code&gt;: Indicates the DMARC version.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;p=reject&lt;/code&gt;: Policy to apply to emails that fail DMARC. Other values can be
&lt;code&gt;none&lt;/code&gt; or &lt;code&gt;quarantine&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;rua=mailto:reports@example.com&lt;/code&gt;: Address where aggregate DMARC reports should
be sent.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;ruf=mailto:forensic@example.com&lt;/code&gt;: Address where forensic (detailed) DMARC
reports should be sent.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;pct=100&lt;/code&gt;: Percentage of emails to which the DMARC policy should be applied.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;aspf=r&lt;/code&gt;: SPF alignment mode. &apos;r&apos; means relaxed (default), while &apos;s&apos; stands
for strict.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;adkim=r&lt;/code&gt;: DKIM alignment mode. &apos;r&apos; is for relaxed, and &apos;s&apos; is for strict.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Once an email is received, the receiving server validates it against SPF and
DKIM. For DMARC to pass, at least one of these, SPF or DKIM, must be valid and
aligned with the claimed domain. Emails failing this check are dealt with
according to the DMARC policy — they might be rejected, quarantined, or let
through with no action.&lt;/p&gt;
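&lt;p&gt;You can also check a domain&apos;s published policy programmatically. Here is a
sketch using Node&apos;s built-in &lt;code&gt;dns&lt;/code&gt; module (the domain is just an
example):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;// Look up a domain&apos;s DMARC policy via its _dmarc TXT record
const dns = require(&apos;node:dns&apos;).promises;

async function getDmarcPolicy(domain) {
  // DMARC policies are published at _dmarc.DOMAIN
  const records = await dns.resolveTxt(&apos;_dmarc.&apos; + domain);
  return records
    .map(function (chunks) {
      // Long TXT records arrive split into chunks; rejoin them
      return chunks.join(&apos;&apos;);
    })
    .filter(function (txt) {
      return txt.indexOf(&apos;v=DMARC1&apos;) === 0;
    });
}

getDmarcPolicy(&apos;example.com&apos;).then(console.log).catch(console.error);
&lt;/code&gt;&lt;/pre&gt;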
&lt;h3&gt;&lt;strong&gt;The Takeaway&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Sophisticated phishing attacks make trust paramount in digital communication.
DMARC goes beyond email security — it ensures genuine emails reach recipients
while malicious ones stay blocked. For organizations and individuals alike,
implementing DMARC moves us toward safer digital communication.&lt;/p&gt;
&lt;p&gt;When reviewing your email security measures, remember: DMARC is not optional —
it is necessary.&lt;/p&gt;
&lt;h3&gt;&lt;strong&gt;Further Resources and Tools&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;For deeper exploration of DMARC:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;DMARC Guide&lt;/strong&gt;: A comprehensive guide by the Global Cyber Alliance that
covers the nuances of DMARC in detail.
&lt;a href=&quot;https://www.globalcyberalliance.org/dmarc/&quot;&gt;Check it out here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;DMARC Setup Checker&lt;/strong&gt;: An invaluable tool provided by the Global Cyber
Alliance. It not only checks if your DMARC is set up correctly but also offers
tips on rectifications if needed.
&lt;a href=&quot;https://dmarcguide.globalcyberalliance.org/#/&quot;&gt;Try the tool here&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
</content:encoded>
        </item>
        <item>
            <title><![CDATA[Check for undefined in JavaScript]]></title>
            <description><![CDATA[Something that everyone that has been working with JavaScript for a while has done is checking if a variable is undefined. In this article, I explain which are the different ways that you can use for it and the differences between them.]]></description>
            <link>https://magarcia.io/check-for-undefined-in-javascript/</link>
            <guid isPermaLink="false">https://magarcia.io/check-for-undefined-in-javascript/</guid>
            <category><![CDATA[javascript]]></category>
            <dc:creator><![CDATA[contact@magarcia.io]]></dc:creator>
            <pubDate>Sun, 03 May 2020 00:00:00 GMT</pubDate>
            <content:encoded>&lt;p&gt;If you write JavaScript regularly, you&apos;ve likely needed to check whether a
variable is &lt;code&gt;undefined&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;But, what is the best way to do it?&lt;/p&gt;
&lt;h2&gt;The intuitive way&lt;/h2&gt;
&lt;p&gt;Any programmer experienced in other languages will intuit:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;if (x === undefined) { ... }
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This works without problems. Almost.&lt;/p&gt;
&lt;p&gt;Direct comparison with &lt;code&gt;undefined&lt;/code&gt; works in all modern browsers. Old browsers,
however, allowed reassignment:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;undefined = &amp;quot;new value&amp;quot;;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With this reassignment, direct comparison fails.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://es5.github.io/#x15.1.1.3&quot;&gt;ECMAScript 5&lt;/a&gt; fixed this in 2009:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;15.1.1.3 undefined&lt;/strong&gt;&lt;br&gt;
The value of &lt;code&gt;undefined&lt;/code&gt; is &lt;strong&gt;undefined&lt;/strong&gt; (see 8.1). This property has the
attributes
&lt;code&gt;{ [[Writable]]: false, [[Enumerable]]: false, [[Configurable]]: false }&lt;/code&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;The “safe” way&lt;/h2&gt;
&lt;p&gt;If you support old browsers and worry about &lt;code&gt;undefined&lt;/code&gt; reassignment, use
alternative methods.&lt;/p&gt;
&lt;h3&gt;Reading the type&lt;/h3&gt;
&lt;p&gt;The
&lt;a href=&quot;https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/typeof&quot;&gt;typeof operator&lt;/a&gt;
returns &lt;code&gt;&amp;quot;undefined&amp;quot;&lt;/code&gt; for undefined values:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;if (typeof x === &amp;quot;undefined&amp;quot;) { ... }
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; &lt;code&gt;typeof&lt;/code&gt; does not throw an error for undeclared variables.&lt;/p&gt;
&lt;h3&gt;Using &lt;code&gt;void&lt;/code&gt;&lt;/h3&gt;
&lt;p&gt;The
&lt;a href=&quot;https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/void&quot;&gt;void operator&lt;/a&gt;
also returns undefined:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;if (x === void(0)) { ... }
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The zero has no special meaning. As MDN states:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The &lt;strong&gt;void operator&lt;/strong&gt; evaluates the given &lt;em&gt;expression&lt;/em&gt; and then returns
&lt;a href=&quot;https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/undefined&quot;&gt;undefined&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Which way is better&lt;/h2&gt;
&lt;p&gt;As a consultant, I learned the best answer: it depends. Here are some tips.&lt;/p&gt;
&lt;p&gt;Follow existing codebase conventions. For new code running only on modern
browsers, use direct comparison; it&apos;s clear and readable even for JavaScript
beginners. For old browser support, create an &lt;code&gt;isUndefined&lt;/code&gt; function with your
preferred method inside. This expresses intent clearly.&lt;/p&gt;
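&lt;p&gt;Such a wrapper is tiny. A sketch using the &lt;code&gt;typeof&lt;/code&gt; approach from above:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;// Safe even in environments where the undefined global could be reassigned
function isUndefined(value) {
  return typeof value === &apos;undefined&apos;;
}

isUndefined(void 0); // true
isUndefined(null); // false
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you later change your mind about the mechanism, only this one function
needs to change.&lt;/p&gt;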
</content:encoded>
        </item>
        <item>
            <title><![CDATA[The power of MVP]]></title>
            <description><![CDATA[A minimum viable product (MVP) is often linked to the startup world but is a very useful tool in large corporations too. And to succeed with it is important to have engineering teams be completely engaged.]]></description>
            <link>https://magarcia.io/the-power-of-mvp/</link>
            <guid isPermaLink="false">https://magarcia.io/the-power-of-mvp/</guid>
            <category><![CDATA[mvp]]></category>
            <dc:creator><![CDATA[contact@magarcia.io]]></dc:creator>
            <pubDate>Wed, 29 Jan 2020 00:00:00 GMT</pubDate>
            <content:encoded>&lt;p&gt;Much has been written about minimum viable products (MVP). I doubt I can add new
insights, but I can share my perspective from an experience with one client.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://magarcia.io/images/mvp.png&quot; alt=&quot;MVP - A skate first, then a bike and finally a car&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;A minimum viable product (MVP) is a version of a product with just enough
features to satisfy early customers and provide feedback for future product
development.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/Minimum_viable_product&quot;&gt;Wikipedia&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Let me set the scene. The client was a leading European real estate
company—though the domain matters little here. The product team envisioned a
complex product to sell to their customers. To gauge user acceptance, we ran
multiple A/B tests to find the most successful approach. The product was
straightforward to implement, but deciding which users should see what was
another story. We had to account for user location, search location, last visit,
and more. The number of variables was large, so we ran the tests for nearly half
a year to gather meaningful data.&lt;/p&gt;
&lt;p&gt;This kind of experiment requires proper metrics, so we measured everything. Once
the test finished, the data analysis team did their job—and the results
disappointed us. They fell far short of expectations. Yet one small part of the
product resonated strongly with users, and a month later the product team built
a new product around it.&lt;/p&gt;
&lt;p&gt;Now the real story begins. The new product lacked the complex rules of the
previous one: no location logic, no visit history, no search patterns. Customers
would get a special section on the page to showcase their products and share it
as they liked. Since the user-facing part already existed from the previous
test, it was time to make it real and start selling.&lt;/p&gt;
&lt;p&gt;We held a meeting to define the architecture and implementation plan. Two
engineering teams, the head of technology, and the product owner—about 14 people
in virtual rooms across countries—discussed the best approach and estimated
completion. Someone pointed out that since customers had to purchase the
product, we&apos;d need to involve the commerce team. The products team would create
a new microservice, and the website team would consume and process that data to
enable the product for each customer.&lt;/p&gt;
&lt;p&gt;Complexity piled up again. Initial estimates put delivery at three months
minimum—probably more, given coordination across engineering, marketing, and
design. Listening to these discussions, I asked the product owner: &amp;quot;&lt;em&gt;How will
you sell this product in the short term?&lt;/em&gt;&amp;quot; The answer unlocked an easier path.
Initially, a small sales team would contact only the most important customers.
After that, they&apos;d evaluate which other customers might be interested.&lt;/p&gt;
&lt;p&gt;With this information, I proposed an unorthodox approach. Instead of building
the complex system, coordinating multiple teams, and blocking the launch until
everything was done—why not hardcode customer IDs? A few lines of logic, and
we&apos;re done. Each time the product sold, someone would email the team, and we&apos;d
add the ID in minutes. This solution doesn&apos;t scale, but it wasn&apos;t meant to.
Since sales would be handled manually at first, we could control the pace of
customer acquisition.&lt;/p&gt;
&lt;p&gt;Everyone received the proposal well. We could start selling earlier and relieve
pressure on the engineering teams. Within one week we had the product in place
and sold to the first customer. Over the following months, the teams worked at a
relaxed pace on the final implementation—one that would eliminate manual steps.&lt;/p&gt;
&lt;p&gt;That&apos;s the story. Here&apos;s what I learned:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Keep the implementation as simple as possible. You&apos;ll reach the market earlier
and may relax some deadlines along the way.&lt;/li&gt;
&lt;li&gt;Involve engineering teams early in product definition. Different perspectives
surface better solutions.&lt;/li&gt;
&lt;li&gt;Share the full project vision with engineers, not just their piece. Context
shapes better architecture.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In conclusion: I&apos;ll keep practicing what I consider one of the most important
traits in a developer—laziness. That instinct drives me to find the simplest,
fastest way to solve a problem.&lt;/p&gt;
</content:encoded>
        </item>
        <item>
            <title><![CDATA[SOLID - Principles of Object-Oriented Design]]></title>
            <description><![CDATA[SOLID Principles are a valuable tool to write good object-oriented software. This article tries to put some light on the subject with simple explanations and examples for each principle using TypeScript.]]></description>
            <link>https://magarcia.io/solid-principles-of-object-oriented-design/</link>
            <guid isPermaLink="false">https://magarcia.io/solid-principles-of-object-oriented-design/</guid>
            <category><![CDATA[typescript]]></category>
            <category><![CDATA[patterns]]></category>
            <dc:creator><![CDATA[contact@magarcia.io]]></dc:creator>
            <pubDate>Sat, 07 Dec 2019 00:00:00 GMT</pubDate>
            <content:encoded>&lt;blockquote&gt;
&lt;p&gt;This article is based on the work done by
&lt;a href=&quot;https://twitter.com/KayandraJT&quot;&gt;Samuel Oloruntoba&lt;/a&gt; in his article
&lt;a href=&quot;https://scotch.io/bar-talk/s-o-l-i-d-the-first-five-principles-of-object-oriented-design&quot;&gt;S.O.L.I.D: The First 5 Principles of Object Oriented Design&lt;/a&gt;
but using TypeScript instead of PHP for the examples.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;strong&gt;SOLID&lt;/strong&gt; is an acronym for the first five principles of the article
&lt;a href=&quot;http://butunclebob.com/ArticleS.UncleBob.PrinciplesOfOod&quot;&gt;&lt;em&gt;Principles of Object-Oriented Design&lt;/em&gt;&lt;/a&gt;
by Robert C. Martin.&lt;/p&gt;
&lt;p&gt;These principles help you write maintainable and extensible code. They also help
you catch code smells, refactor easily, and practice agile development.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;S&lt;/strong&gt; stands for &lt;strong&gt;SRP&lt;/strong&gt; - Single Responsibility Principle&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;O&lt;/strong&gt; stands for &lt;strong&gt;OCP&lt;/strong&gt; - Open-Closed Principle&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;L&lt;/strong&gt; stands for &lt;strong&gt;LSP&lt;/strong&gt; - Liskov Substitution Principle&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;I&lt;/strong&gt; stands for &lt;strong&gt;ISP&lt;/strong&gt; - Interface Segregation Principle&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;D&lt;/strong&gt; stands for &lt;strong&gt;DIP&lt;/strong&gt; - Dependency Inversion Principle&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;SRP - Single Responsibility Principle&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;A software entity (classes, modules, functions, etc.) should have one, and
only one, reason to change.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;An entity should do only one thing. Single responsibility means &lt;strong&gt;work in
isolation&lt;/strong&gt;. If a software entity performs calculations, the only reason to
change it is when those calculations need to change.&lt;/p&gt;
&lt;p&gt;An example clarifies this principle. Say we must implement an application that
calculates the total area of given shapes and prints the result. Let&apos;s start
with our shape classes:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;class Circle {
  public readonly radius: number;

  constructor(radius: number) {
    this.radius = radius;
  }
}

class Square {
  public readonly side: number;

  constructor(side: number) {
    this.side = side;
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now we create an &lt;code&gt;AreaCalculator&lt;/code&gt; class that sums the areas of our
shapes.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;class AreaCalculator {
  public readonly shapes: Shape[];

  constructor(shapes: Shape[]) {
    this.shapes = shapes;
  }

  public sum(): number {
    // logic to sum the areas
  }

  public output(): string {
    return `Sum of the areas of provided shapes: ${this.sum()}`;
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To use &lt;code&gt;AreaCalculator&lt;/code&gt;, create an array of shapes, instantiate the class, and
display the output.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;const shapes: any[] = [new Circle(2), new Circle(3), new Square(5)];

const areas = new AreaCalculator(shapes);

console.log(areas.output());
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This implementation has a problem: &lt;code&gt;AreaCalculator&lt;/code&gt; handles both the area
calculation logic &lt;strong&gt;and&lt;/strong&gt; the output formatting. What if the user wants JSON
output?&lt;/p&gt;
&lt;p&gt;The &lt;em&gt;Single Responsibility Principle&lt;/em&gt; addresses this. &lt;code&gt;AreaCalculator&lt;/code&gt; should
change only when the calculation logic changes, not when we want different
output formats.&lt;/p&gt;
&lt;p&gt;We fix this by creating a class whose sole responsibility is output formatting.&lt;/p&gt;
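&lt;p&gt;A minimal &lt;code&gt;Outputter&lt;/code&gt; might look like this (a sketch; it only needs
something with a &lt;code&gt;sum()&lt;/code&gt; method, which &lt;code&gt;AreaCalculator&lt;/code&gt; provides):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;class Outputter {
  private readonly calculator: { sum(): number };

  constructor(calculator: { sum(): number }) {
    this.calculator = calculator;
  }

  // Plain-text formatting lives here, not in AreaCalculator
  public text(): string {
    return `Sum of the areas of provided shapes: ${this.calculator.sum()}`;
  }

  // A new output format means touching only this class
  public json(): string {
    return JSON.stringify({ sum: this.calculator.sum() });
  }
}
&lt;/code&gt;&lt;/pre&gt;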
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;const shapes: any[] = [new Circle(2), new Circle(3), new Square(5)];

const areas = new AreaCalculator(shapes);
const output = new Outputter(areas);

console.log(output.text());
console.log(output.json());
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now each class has one responsibility. Changing the calculation logic affects
only &lt;code&gt;AreaCalculator&lt;/code&gt;; changing the output format affects only &lt;code&gt;Outputter&lt;/code&gt;.&lt;/p&gt;
&lt;h2&gt;OCP - Open-Closed Principle&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;Software entities (classes, modules, functions, etc.) should be open for
extension, but closed for modification.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Software entities should be easy to extend without modifying the entity itself.&lt;/p&gt;
&lt;p&gt;Using the previous example, we want to add a new shape: the &lt;em&gt;Triangle&lt;/em&gt;. First,
examine the sum method in &lt;code&gt;AreaCalculator&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;class AreaCalculator {
  public readonly shapes: Shape[];

  constructor(shapes: Shape[]) {
    this.shapes = shapes;
  }

  public sum() {
    let sum: number = 0;

    for (let shape of this.shapes) {
      if (shape instanceof Circle) {
        sum += Math.PI * Math.pow(shape.radius, 2);
      } else if (shape instanceof Square) {
        sum += shape.side * shape.side;
      }
    }

    return sum;
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This violates the &lt;em&gt;Open/Closed Principle&lt;/em&gt;: adding triangle support requires
modifying &lt;code&gt;AreaCalculator&lt;/code&gt; with a new &lt;code&gt;else if&lt;/code&gt; block.&lt;/p&gt;
&lt;p&gt;To fix this, move the area calculation to each shape class and define an
interface that describes what a shape can do.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;interface Shape {
  area(): number;
}

class Circle implements Shape {
  public readonly radius: number;

  constructor(radius: number) {
    this.radius = radius;
  }

  public area(): number {
    return Math.PI * Math.pow(this.radius, 2);
  }
}
&lt;/code&gt;&lt;/pre&gt;
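&lt;p&gt;For illustration, here is a sketch of &lt;code&gt;Square&lt;/code&gt; and the new &lt;code&gt;Triangle&lt;/code&gt;
implementing the same interface; adding &lt;code&gt;Triangle&lt;/code&gt; requires no change to any
existing class:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;class Square implements Shape {
  public readonly side: number;

  constructor(side: number) {
    this.side = side;
  }

  public area(): number {
    return this.side * this.side;
  }
}

// Extension, not modification: AreaCalculator stays untouched
class Triangle implements Shape {
  constructor(
    public readonly base: number,
    public readonly height: number,
  ) {}

  public area(): number {
    return (this.base * this.height) / 2;
  }
}
&lt;/code&gt;&lt;/pre&gt;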
&lt;p&gt;Now &lt;code&gt;AreaCalculator&lt;/code&gt; accepts any shape that implements the &lt;code&gt;Shape&lt;/code&gt; interface:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;class AreaCalculator {
  public readonly shapes: Shape[];

  constructor(shapes: Shape[]) {
    this.shapes = shapes;
  }

  public sum(): number {
    let sum: number = 0;

    for (let shape of this.shapes) {
      sum += shape.area();
    }

    return sum;
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;LSP - Liskov Substitution Principle&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;Derived classes must be substitutable for their base classes.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Objects in a program should be replaceable with instances of their subtypes
without altering the program&apos;s correctness. A subclass must preserve the
behavior and state semantics of its parent abstraction.&lt;/p&gt;
&lt;p&gt;Continuing with the &lt;code&gt;AreaCalculator&lt;/code&gt; class, now we want to create a
&lt;code&gt;VolumeCalculator&lt;/code&gt; class that extends &lt;code&gt;AreaCalculator&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;class VolumeCalculator extends AreaCalculator {
  constructor(shapes: Shape[]) {
    super(shapes);
  }

  public sum(): number[] {
    // logic to calculate the volumes and then
    // return an array of results
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;A detailed &lt;code&gt;Outputter&lt;/code&gt; class clarifies this example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;class Outputter {
  private calculator: AreaCalculator;

  constructor(calculator: AreaCalculator) {
    this.calculator = calculator;
  }

  public json(): string {
    return JSON.stringify({
      sum: this.calculator.sum(),
    });
  }

  public text(): string {
    return `Sum of provided shapes: ${this.calculator.sum()}`;
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With this implementation, if we try to run code like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;const areas = new AreaCalculator(shapes2D);
const volumes = new VolumeCalculator(shapes3D);

console.log(&amp;quot;Areas - &amp;quot;, new Outputter(areas).text());
console.log(&amp;quot;Volumes - &amp;quot;, new Outputter(volumes).text());
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The program runs but produces inconsistent output:
&lt;code&gt;Areas - Sum of provided shapes: 42&lt;/code&gt; versus
&lt;code&gt;Volumes - Sum of provided shapes: 13, 15, 14&lt;/code&gt;. This breaks our expectations.&lt;/p&gt;
&lt;p&gt;The &lt;em&gt;Liskov Substitution Principle&lt;/em&gt; is violated: &lt;code&gt;VolumeCalculator.sum()&lt;/code&gt;
returns an array of numbers, while &lt;code&gt;AreaCalculator.sum()&lt;/code&gt; returns a single
number.&lt;/p&gt;
&lt;p&gt;The fix: &lt;code&gt;VolumeCalculator.sum()&lt;/code&gt; must return a number, not an array.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;class VolumeCalculator extends AreaCalculator {

  // constructor

  public sum(): number {
    // logic to calculate the volumes and then
    // return the total as a single number
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;ISP - Interface Segregation Principle&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;Make fine grained interfaces that are client specific.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Keep interfaces small so clients implement only the methods they need.&lt;/p&gt;
&lt;p&gt;Our shape interface now includes volume calculation:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;interface Shape {
  area(): number;
  volume(): number;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;But &lt;code&gt;Square&lt;/code&gt; is a 2D shape with no volume, yet the interface forces it to
implement a &lt;code&gt;volume&lt;/code&gt; method.&lt;/p&gt;
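&lt;p&gt;As a sketch of the problem, &lt;code&gt;Square&lt;/code&gt; ends up stubbing a method it cannot
honour (throwing is a common workaround, and a design smell):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;class Square implements Shape {
  constructor(public readonly side: number) {}

  public area(): number {
    return this.side * this.side;
  }

  // Forced by the fat interface, even though a 2D shape has no volume
  public volume(): number {
    throw new Error(&amp;quot;A square has no volume&amp;quot;);
  }
}
&lt;/code&gt;&lt;/pre&gt;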
&lt;p&gt;The &lt;em&gt;Interface Segregation Principle&lt;/em&gt; leads us to split &lt;code&gt;Shape&lt;/code&gt; into separate
interfaces for 2D and 3D shapes:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;interface Shape2D {
  area(): number;
}

interface Shape3D {
  volume(): number;
}

class Cuboid implements Shape2D, Shape3D {
  public area(): number {
    // calculate the surface area of the cuboid
  }

  public volume(): number {
    // calculate the volume of the cuboid
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;DIP - Dependency Inversion Principle&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;Depend on abstractions, not on concretions.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;High-level modules should depend on abstractions, not on low-level modules.&lt;/p&gt;
&lt;p&gt;This principle enables decoupling. Consider a &lt;code&gt;ShapeManager&lt;/code&gt; class that saves
shapes:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;class ShapeManager {
  private database: MySQL;

  constructor(database: MySQL) {
    this.database = database;
  }

  public load(name: string): Shape {}
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;code&gt;ShapeManager&lt;/code&gt; (high-level) depends directly on &lt;code&gt;MySQL&lt;/code&gt; (low-level), violating
the &lt;em&gt;Dependency Inversion Principle&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Changing databases would require editing &lt;code&gt;ShapeManager&lt;/code&gt;, also violating the
&lt;em&gt;Open-Closed Principle&lt;/em&gt;. The solution: depend on a &lt;code&gt;Database&lt;/code&gt; interface instead:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;interface Database {
  connect(): Connection;
}

class MySQL implements Database {
  public connect(): Connection {
    // creates a connection
  }
}

class ShapeManager {
  private database: Database;

  constructor(database: Database) {
    this.database = database;
  }

  public load(name: string): Shape {}
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now both our high-level and low-level modules depend on abstractions.&lt;/p&gt;
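&lt;p&gt;To see the payoff, imagine a second, hypothetical backend. Because
&lt;code&gt;ShapeManager&lt;/code&gt; only knows about the &lt;code&gt;Database&lt;/code&gt; interface, either
implementation can be injected without touching it:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;// A hypothetical alternative implementation of the Database interface
class PostgreSQL implements Database {
  public connect(): Connection {
    // creates a PostgreSQL connection
  }
}

// Swapping databases requires no change to ShapeManager
const manager = new ShapeManager(new PostgreSQL());
&lt;/code&gt;&lt;/pre&gt;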
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;The &lt;strong&gt;SOLID&lt;/strong&gt; principles may seem difficult at first, and knowing when to apply
them takes practice. But with experience, applying these principles becomes
natural and intuitive.&lt;/p&gt;
</content:encoded>
        </item>
        <item>
            <title><![CDATA[Adaptive Media Serving using Service Workers]]></title>
            <description><![CDATA[Everyone has experienced how visiting a web site over a slow network connection usually takes ages to load. We are going to explore how to load different media content using the Network Information API.]]></description>
            <link>https://magarcia.io/adaptative-media-serving-using-service-workers/</link>
            <guid isPermaLink="false">https://magarcia.io/adaptative-media-serving-using-service-workers/</guid>
            <category><![CDATA[javascript]]></category>
            <category><![CDATA[service-workers]]></category>
            <category><![CDATA[performance]]></category>
            <dc:creator><![CDATA[contact@magarcia.io]]></dc:creator>
            <pubDate>Mon, 17 Jun 2019 00:00:00 GMT</pubDate>
            <content:encoded>&lt;p&gt;&lt;em&gt;Pairing with &lt;a href=&quot;https://github.com/estermv&quot;&gt;@Ester Martí&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Visiting a website over a slow network connection takes ages to load, making the
experience painful or impossible.&lt;/p&gt;
&lt;p&gt;Web developers often forget load performance while adding fancy features. But
users likely browse on mid-range or low-end mobile devices with 3G connections
at best--not the latest MacBook Pro on gigabit fiber.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href=&quot;https://www.statista.com/statistics/241462/global-mobile-phone-website-traffic-share/&quot;&gt;In 2018, 52.2% of all global web pages were served to mobile phones.&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Performance matters, and media delivery consumes the most resources. We&apos;ll adapt
media delivery based on network connection using the
&lt;a href=&quot;http://wicg.github.io/netinfo/&quot;&gt;Network Information API&lt;/a&gt;. This improves an
experiment I built with &lt;a href=&quot;https://twitter.com/eduaquiles&quot;&gt;@Eduardo Aquiles&lt;/a&gt; as a
React component, similar to &lt;a href=&quot;https://mxb.dev/&quot;&gt;Max Böck&apos;s&lt;/a&gt; article on
&lt;a href=&quot;https://mxb.dev/blog/connection-aware-components/&quot;&gt;connection-aware components&lt;/a&gt;--but
using service workers.&lt;/p&gt;
&lt;h2&gt;The Network Information API&lt;/h2&gt;
&lt;p&gt;The Network Information API is a draft specification exposing device connection
information to JavaScript.&lt;/p&gt;
&lt;p&gt;The interface provides several network attributes. The most relevant here:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;type:&lt;/strong&gt; The
&lt;a href=&quot;http://wicg.github.io/netinfo/#dfn-connection-type&quot;&gt;connection type&lt;/a&gt; that the
user agent is using. (e.g. ‘wifi’, ‘cellular’, ‘ethernet’, etc.)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;effectiveType:&lt;/strong&gt; The
&lt;a href=&quot;http://wicg.github.io/netinfo/#dfn-effective-connection-type&quot;&gt;effective connection type&lt;/a&gt;
that is determined using a combination of recently observed
&lt;a href=&quot;http://wicg.github.io/netinfo/#dom-networkinformation-rtt&quot;&gt;rtt&lt;/a&gt; and
&lt;a href=&quot;http://wicg.github.io/netinfo/#dom-networkinformation-downlink&quot;&gt;downlink&lt;/a&gt;
values. (&lt;em&gt;&lt;a href=&quot;#effectivetype-values&quot;&gt;see table&lt;/a&gt;&lt;/em&gt;)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;saveData:&lt;/strong&gt; Indicates when the user has requested reduced data usage.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;effectiveType values&lt;/h3&gt;
&lt;table&gt;
&lt;caption align=&quot;bottom&quot;&gt;
Table of
&lt;a href=&quot;http://wicg.github.io/netinfo/#dfn-effective-connection-type&quot;&gt;effective connection types (ECT)&lt;/a&gt;
&lt;/caption&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;ECT&lt;/th&gt;
&lt;th&gt;Minimum RTT (ms)&lt;/th&gt;
&lt;th&gt;Maximum downlink (Kbps)&lt;/th&gt;
&lt;th&gt;Explanation&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td data-column=&quot;ECT&quot;&gt;slow‑2g&lt;/td&gt;
&lt;td data-column=&quot;RTT&quot;&gt;2000&lt;/td&gt;
&lt;td data-column=&quot;Downlink&quot;&gt;50&lt;/td&gt;
&lt;td data-column=&quot;Explanation&quot;&gt;
The network is suited for small transfers only such as text-only pages.
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td data-column=&quot;ECT&quot;&gt;2g&lt;/td&gt;
&lt;td data-column=&quot;RTT&quot;&gt;1400&lt;/td&gt;
&lt;td data-column=&quot;Downlink&quot;&gt;70&lt;/td&gt;
&lt;td data-column=&quot;Explanation&quot;&gt;
The network is suited for transfers of small images.
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td data-column=&quot;ECT&quot;&gt;3g&lt;/td&gt;
&lt;td data-column=&quot;RTT&quot;&gt;270&lt;/td&gt;
&lt;td data-column=&quot;Downlink&quot;&gt;700&lt;/td&gt;
&lt;td data-column=&quot;Explanation&quot;&gt;
The network is suited for transfers of large assets such as high
resolution images, audio, and SD video.
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td data-column=&quot;ECT&quot;&gt;4g&lt;/td&gt;
&lt;td data-column=&quot;RTT&quot;&gt;0&lt;/td&gt;
&lt;td data-column=&quot;Downlink&quot;&gt;∞&lt;/td&gt;
&lt;td data-column=&quot;Explanation&quot;&gt;
The network is suited for HD video, real-time video, etc.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3&gt;Browser support&lt;/h3&gt;
&lt;p&gt;The API lacks full browser support but works in the
&lt;a href=&quot;https://caniuse.com/#feat=netinfo&quot;&gt;most popular mobile browsers&lt;/a&gt;--where this
technique has the greatest impact.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://magarcia.io/images/caniuse.png&quot; alt=&quot;Browser support for Network Information API&quot;&gt;&lt;/p&gt;
&lt;p&gt;In fact, about 70% of mobile users have this API available on their device.&lt;/p&gt;
&lt;h2&gt;Adaptive Media Serving&lt;/h2&gt;
&lt;p&gt;We&apos;ll serve different media resources based on &lt;code&gt;effectiveType&lt;/code&gt;. &amp;quot;Different
media&amp;quot; could mean switching between HD video, HD image, or low-quality image, as
&lt;a href=&quot;https://addyosmani.com/blog/adaptive-serving/&quot;&gt;Addy Osmani&lt;/a&gt; suggests.&lt;/p&gt;
&lt;p&gt;This example uses different compression levels for the same image.&lt;/p&gt;
&lt;p&gt;First, get the proper quality based on network conditions:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;function getMediaQuality() {
  const connection =
    navigator.connection ||
    navigator.mozConnection ||
    navigator.webkitConnection;

  if (!connection) {
    return &amp;quot;medium&amp;quot;;
  }

  switch (connection.effectiveType) {
    case &amp;quot;slow-2g&amp;quot;:
    case &amp;quot;2g&amp;quot;:
      return &amp;quot;low&amp;quot;;
    case &amp;quot;3g&amp;quot;:
      return &amp;quot;medium&amp;quot;;
    case &amp;quot;4g&amp;quot;:
      return &amp;quot;high&amp;quot;;
    default:
      return &amp;quot;low&amp;quot;;
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Imagine an image server accepting a &lt;strong&gt;quality&lt;/strong&gt; query parameter (&lt;code&gt;low&lt;/code&gt;,
&lt;code&gt;medium&lt;/code&gt;, or &lt;code&gt;high&lt;/code&gt;). Set the quality in the &lt;code&gt;src&lt;/code&gt; attribute:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-html&quot;&gt;&amp;lt;img src=&amp;quot;http://images.magarcia.io/cute_cat?quality=low&amp;quot; alt=&amp;quot;Cute cat&amp;quot; /&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;const images = document.querySelectorAll(&amp;quot;img&amp;quot;);
images.forEach((img) =&amp;gt; {
  img.src = img.src.replace(&amp;quot;low&amp;quot;, getMediaQuality());
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The default quality is &lt;code&gt;low&lt;/code&gt;, so devices load the low-quality image first, then
upgrade on high-speed connections.&lt;/p&gt;
&lt;p&gt;The JavaScript gets all images and replaces the quality parameter based on
&lt;code&gt;getMediaQuality&lt;/code&gt;. For &lt;code&gt;low&lt;/code&gt; quality, no additional requests occur. For &lt;code&gt;medium&lt;/code&gt;
or &lt;code&gt;high&lt;/code&gt;, two requests happen: one for &lt;code&gt;low&lt;/code&gt; when parsing the &lt;code&gt;img&lt;/code&gt; tag,
another for better quality when JavaScript executes.&lt;/p&gt;
&lt;p&gt;This improves load times on slow networks but doubles requests on fast
connections, consuming extra data.&lt;/p&gt;
&lt;h2&gt;Using Service Workers&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://developers.google.com/web/fundamentals/primers/service-workers/&quot;&gt;Service workers&lt;/a&gt;
solve the double-request problem by intercepting browser requests and replacing
them with the appropriate quality.&lt;/p&gt;
&lt;p&gt;First, register the service worker:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;if (&amp;quot;serviceWorker&amp;quot; in navigator) {
  window.addEventListener(&amp;quot;load&amp;quot;, function () {
    navigator.serviceWorker.register(&amp;quot;/sw.js&amp;quot;).then(
      function (registration) {
        console.log(
          &amp;quot;ServiceWorker registration successful with scope: &amp;quot;,
          registration.scope,
        );
      },
      function (err) {
        console.log(&amp;quot;ServiceWorker registration failed: &amp;quot;, err);
      },
    );
  });
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Add a fetch event listener that appends the right quality parameter to image
requests:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;self.addEventListener(&amp;quot;fetch&amp;quot;, function (event) {
  if (/\.(jpg|png|webp)$/.test(event.request.url)) {
    const url = event.request.url + `?quality=${getMediaQuality()}`;
    event.respondWith(fetch(url));
  }
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now omit the quality parameter from &lt;code&gt;img&lt;/code&gt; tags--the service worker handles it:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-html&quot;&gt;&amp;lt;img src=&amp;quot;http://images.magarcia.io/cute_cat&amp;quot; alt=&amp;quot;Cute cat&amp;quot; /&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;The code&lt;/h2&gt;
&lt;p&gt;Find the complete, cleaner code in
&lt;a href=&quot;https://github.com/estermv/adaptative-media-serving&quot;&gt;this GitHub repo&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Further Reading&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;http://wicg.github.io/netinfo/&quot;&gt;Network Information API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.mozilla.org/en-US/docs/Web/API/Network_Information_API&quot;&gt;Network Information API - Web APIs | MDN&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</content:encoded>
        </item>
        <item>
            <title><![CDATA[BLoC Pattern with React Hooks]]></title>
            <description><![CDATA[About how to extract the business logic from the components of a React application using the BLoC pattern from Flutter, the new hooks API, and RxJS observables.]]></description>
            <link>https://magarcia.io/bloc-pattern-with-react-hooks/</link>
            <guid isPermaLink="false">https://magarcia.io/bloc-pattern-with-react-hooks/</guid>
            <category><![CDATA[react]]></category>
            <category><![CDATA[patterns]]></category>
            <dc:creator><![CDATA[contact@magarcia.io]]></dc:creator>
            <pubDate>Mon, 18 Feb 2019 00:00:00 GMT</pubDate>
            <content:encoded>&lt;p&gt;The &lt;strong&gt;BLoC Pattern&lt;/strong&gt; has been designed by &lt;em&gt;Paolo Soares&lt;/em&gt; and &lt;em&gt;Cong Hui&lt;/em&gt;, from
Google and first presented during the &lt;em&gt;DartConf 2018&lt;/em&gt; (January 23-24, 2018).
&lt;a href=&quot;https://www.youtube.com/watch?v=PLHln7wHgPE&quot; title=&quot;BLoC Pattern Flutter&quot;&gt;See the video on YouTube&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;BLoC stands for &lt;strong&gt;B&lt;/strong&gt;usiness &lt;strong&gt;Lo&lt;/strong&gt;gic &lt;strong&gt;C&lt;/strong&gt;omponent. Initially conceived to
share code between Flutter and Angular Dart, it works independently of platform:
web application, mobile application, or back-end.&lt;/p&gt;
&lt;p&gt;It offers an alternative to the
&lt;a href=&quot;https://pub.dartlang.org/packages/flutter_redux&quot; title=&quot;Redux port for flutter&quot;&gt;Redux port for flutter&lt;/a&gt;
using Dart streams. We&apos;ll use Observables from &lt;a href=&quot;https://rxjs.dev/&quot; title=&quot;RxJS&quot;&gt;RxJS&lt;/a&gt;,
though &lt;a href=&quot;http://staltz.github.io/xstream/&quot; title=&quot;xstream&quot;&gt;xstream&lt;/a&gt; works equally well.&lt;/p&gt;
&lt;p&gt;In short, the BLoC will:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;contain business logic (ideally in bigger applications we will have multiple
BLoCs)&lt;/li&gt;
&lt;li&gt;rely &lt;em&gt;exclusively&lt;/em&gt; on the use of &lt;em&gt;Observables&lt;/em&gt; for both input (&lt;em&gt;Observer&lt;/em&gt;) and
output (&lt;em&gt;Observable&lt;/em&gt;)&lt;/li&gt;
&lt;li&gt;remain &lt;em&gt;platform independent&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;remain &lt;em&gt;environment independent&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;How does BLoC work?&lt;/h2&gt;
&lt;p&gt;Others have explained BLoC better than I will here, so I&apos;ll cover just the
basics.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://magarcia.io/images/bloc-schema.png&quot; alt=&quot;BLoC Schema&quot;&gt;&lt;/p&gt;
&lt;p&gt;The BLoC holds business logic; components know nothing about its internals.
Components send &lt;em&gt;events&lt;/em&gt; to the BLoC via &lt;em&gt;Observers&lt;/em&gt; and receive notifications
via &lt;em&gt;Observables&lt;/em&gt;.&lt;/p&gt;
&lt;h2&gt;Implementing the BLoC&lt;/h2&gt;
&lt;p&gt;Here is a basic TypeScript search BLoC using RxJS:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;export class SearchBloc {
  private _results$: Observable&amp;lt;string[]&amp;gt;;
  private _preamble$: Observable&amp;lt;string&amp;gt;;
  private _query$ = new BehaviorSubject&amp;lt;string&amp;gt;(&amp;quot;&amp;quot;);

  constructor(private api: API) {
    this._results$ = this._query$.pipe(
      switchMap((query) =&amp;gt; {
        return observableFrom(this.api.search(query));
      }),
    );
    this._preamble$ = this._results$.pipe(
      withLatestFrom(this._query$, (_, q) =&amp;gt; {
        return q ? `Results for ${q}` : &amp;quot;All results&amp;quot;;
      }),
    );
  }

  get results$(): Observable&amp;lt;string[]&amp;gt; {
    return this._results$;
  }

  get preamble$(): Observable&amp;lt;string&amp;gt; {
    return this._preamble$;
  }

  get query(): Observer&amp;lt;string&amp;gt; {
    return this._query$;
  }

  dispose() {
    this._query$.complete();
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;code&gt;results$&lt;/code&gt; and &lt;code&gt;preamble$&lt;/code&gt; expose asynchronous values that change when &lt;code&gt;query&lt;/code&gt;
changes.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;query&lt;/code&gt; exposes an &lt;code&gt;Observer&amp;lt;string&amp;gt;&lt;/code&gt; for components to add new values. Inside
&lt;code&gt;SearchBloc&lt;/code&gt;, &lt;code&gt;_query$: BehaviorSubject&amp;lt;string&amp;gt;&lt;/code&gt; serves as the stream source,
and the constructor declares &lt;code&gt;_results$&lt;/code&gt; and &lt;code&gt;_preamble$&lt;/code&gt; to respond to
&lt;code&gt;_query$&lt;/code&gt;.&lt;/p&gt;
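&lt;p&gt;Because the BLoC is platform independent, the same class can be driven from
plain code or a unit test with no React involved (the &lt;code&gt;api&lt;/code&gt; instance below is
assumed to exist):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;const bloc = new SearchBloc(api);

// react to outputs...
bloc.results$.subscribe((results) =&amp;gt; console.log(results));
bloc.preamble$.subscribe((preamble) =&amp;gt; console.log(preamble));

// ...and push inputs
bloc.query.next(&amp;quot;shoes&amp;quot;);

// complete the streams when done
bloc.dispose();
&lt;/code&gt;&lt;/pre&gt;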
&lt;h2&gt;Using it in React&lt;/h2&gt;
&lt;p&gt;To use it in React, create a BLoC instance and share it with child components
via React context.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-jsx&quot;&gt;const searchBloc = new SearchBloc(new API());
const SearchContext = React.createContext(searchBloc);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Expose it using the context provider:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-jsx&quot;&gt;const App = () =&amp;gt; {
  const searchBloc = useContext(SearchContext);

  useEffect(() =&amp;gt; {
    return () =&amp;gt; searchBloc.dispose();
  }, [searchBloc]);

  return (
    &amp;lt;SearchContext.Provider value={searchBloc}&amp;gt;
      &amp;lt;SearchInput /&amp;gt;
      &amp;lt;ResultList /&amp;gt;
    &amp;lt;/SearchContext.Provider&amp;gt;
  );
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;useEffect&lt;/code&gt; cleanup calls the dispose method, completing the observer
when the component unmounts.&lt;/p&gt;
&lt;p&gt;Publish changes to the BLoC from the &lt;code&gt;SearchInput&lt;/code&gt; component:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-jsx&quot;&gt;const SearchInput = () =&amp;gt; {
  const searchBloc = useContext(SearchContext);
  const [query, setQuery] = useState(&amp;quot;&amp;quot;);

  useEffect(() =&amp;gt; {
    searchBloc.query.next(query);
  }, [searchBloc, query]);

  return (
    &amp;lt;input
      type=&amp;quot;text&amp;quot;
      name=&amp;quot;Search&amp;quot;
      value={query}
      onChange={({ target }) =&amp;gt; setQuery(target.value)}
    /&amp;gt;
  );
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We get the BLoC via &lt;code&gt;useContext&lt;/code&gt;, then &lt;code&gt;useEffect&lt;/code&gt; publishes each query change
to the BLoC.&lt;/p&gt;
&lt;p&gt;Now the &lt;code&gt;ResultList&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-jsx&quot;&gt;const ResultList = () =&amp;gt; {
  const searchBloc = useContext(SearchContext);
  const [results, setResults] = useState([]);

  useEffect(() =&amp;gt; {
    const subscription = searchBloc.results$.subscribe(setResults);
    return () =&amp;gt; subscription.unsubscribe();
  }, [searchBloc]);

  return (
    &amp;lt;div&amp;gt;
      {results.map((result) =&amp;gt; (
        &amp;lt;div key={result}&amp;gt;{result}&amp;lt;/div&amp;gt;
      ))}
    &amp;lt;/div&amp;gt;
  );
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We use &lt;code&gt;useContext&lt;/code&gt; to get the BLoC, then &lt;code&gt;useEffect&lt;/code&gt; subscribes to &lt;code&gt;results$&lt;/code&gt;
and updates local state. The cleanup function unsubscribes when the component
unmounts.&lt;/p&gt;
&lt;h2&gt;Final thoughts&lt;/h2&gt;
&lt;p&gt;The final code is straightforward with basic knowledge of &lt;em&gt;Observables&lt;/em&gt; and
&lt;em&gt;hooks&lt;/em&gt;. The code is readable and keeps business logic outside components. We
must remember to unsubscribe from observables and dispose the BLoC on unmount,
but custom hooks like &lt;code&gt;useBlocObservable&lt;/code&gt; and &lt;code&gt;useBlocObserver&lt;/code&gt; could solve
this. I plan to try this in a side project where I use this pattern.&lt;/p&gt;
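&lt;p&gt;As a sketch, a hypothetical &lt;code&gt;useBlocObservable&lt;/code&gt; hook could wrap the
subscribe/unsubscribe dance:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;function useBlocObservable&amp;lt;T&amp;gt;(observable$: Observable&amp;lt;T&amp;gt;, initialValue: T): T {
  const [value, setValue] = useState(initialValue);

  useEffect(() =&amp;gt; {
    const subscription = observable$.subscribe(setValue);
    return () =&amp;gt; subscription.unsubscribe();
  }, [observable$]);

  return value;
}

// Usage in ResultList:
// const results = useBlocObservable(searchBloc.results$, []);
&lt;/code&gt;&lt;/pre&gt;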
</content:encoded>
        </item>
        <item>
            <title><![CDATA[Refactor TodoMVC with Redux Starter Kit]]></title>
            <description><![CDATA[Redux Starter Kit is a toolset to make clean and readable code when working with React and Redux. See an example of how you can refactor an existing application getting all the profit from Redux Starter Kit.]]></description>
            <link>https://magarcia.io/todomvc-redux-starter-kit/</link>
            <guid isPermaLink="false">https://magarcia.io/todomvc-redux-starter-kit/</guid>
            <category><![CDATA[react]]></category>
            <category><![CDATA[redux]]></category>
            <category><![CDATA[javascript]]></category>
            <dc:creator><![CDATA[contact@magarcia.io]]></dc:creator>
            <pubDate>Sat, 26 Jan 2019 00:00:00 GMT</pubDate>
            <content:encoded>&lt;p&gt;I&apos;ve worked with &lt;a href=&quot;https://reactjs.org/&quot;&gt;React&lt;/a&gt; for over two years. I started on
a large project that already used &lt;a href=&quot;https://redux.js.org/&quot;&gt;Redux&lt;/a&gt;. Jumping into
so much existing code felt overwhelming, especially with an unfamiliar
framework. Over time, I grew comfortable and experienced.&lt;/p&gt;
&lt;p&gt;Recently I discovered &lt;a href=&quot;https://redux-starter-kit.js.org/&quot;&gt;Redux Starter Kit&lt;/a&gt;
from the Redux team. This toolset provides utilities that simplify working with
Redux. One tool, &lt;code&gt;createReducer&lt;/code&gt;, follows a pattern I&apos;ve used for a while. It
reduces boilerplate and speeds up development, especially in new projects.&lt;/p&gt;
&lt;p&gt;To learn the toolset, I migrated an existing Redux codebase. For my example, I
chose the omnipresent &lt;a href=&quot;http://todomvc.com/&quot;&gt;TodoMVC&lt;/a&gt;, specifically the version
from the
&lt;a href=&quot;https://github.com/reduxjs/redux/tree/master/examples/todomvc&quot;&gt;Redux repository&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Starting point&lt;/h2&gt;
&lt;p&gt;The app has two main reducers: &lt;code&gt;visibilityFilter&lt;/code&gt; and &lt;code&gt;todos&lt;/code&gt;. Each has its own
actions, action creators, and selectors.&lt;/p&gt;
&lt;h2&gt;Visibility Filter&lt;/h2&gt;
&lt;p&gt;I started with the simpler reducer, then moved to the more complex one.&lt;/p&gt;
&lt;h3&gt;Reducer&lt;/h3&gt;
&lt;p&gt;The reducer from the Redux example is already simple and clear.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-jsx&quot;&gt;// reducers/visibilityFilter.js
import { SET_VISIBILITY_FILTER } from &amp;quot;../constants/ActionTypes&amp;quot;;
import { SHOW_ALL } from &amp;quot;../constants/TodoFilters&amp;quot;;

export default (state = SHOW_ALL, action) =&amp;gt; {
  switch (action.type) {
    case SET_VISIBILITY_FILTER:
      return action.filter;
    default:
      return state;
  }
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Redux Starter Kit provides &lt;code&gt;createReducer&lt;/code&gt; for creating reducers. As I
mentioned, I already use this pattern and find it effective.&lt;/p&gt;
&lt;p&gt;Instead of writing a reducer with a &lt;code&gt;switch&lt;/code&gt; statement, you pass the
initial state as the first parameter and an object mapping action types to
reducer functions (&lt;code&gt;(state, action) =&amp;gt; { /* reducer code */ }&lt;/code&gt;).&lt;/p&gt;
&lt;p&gt;It reduces boilerplate and automatically handles the &lt;code&gt;default&lt;/code&gt; case with
&lt;code&gt;return state&lt;/code&gt;. The biggest benefit: improved readability.&lt;/p&gt;
&lt;p&gt;Here is the visibility filter reducer using &lt;code&gt;createReducer&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-jsx&quot;&gt;// reducers/visibilityFilter.js
import { createReducer } from &amp;quot;redux-starter-kit&amp;quot;;
import { SET_VISIBILITY_FILTER } from &amp;quot;../constants/ActionTypes&amp;quot;;
import { SHOW_ALL } from &amp;quot;../constants/TodoFilters&amp;quot;;

export default createReducer(SHOW_ALL, {
  [SET_VISIBILITY_FILTER]: (state, action) =&amp;gt; action.filter,
});
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Action creators&lt;/h3&gt;
&lt;p&gt;Now for the actions. The visibility filter has one action,
&lt;code&gt;SET_VISIBILITY_FILTER&lt;/code&gt;, with a simple creator:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-jsx&quot;&gt;// actions/index.js
import * as types from &amp;quot;../constants/ActionTypes&amp;quot;;

/* ... Other actions ...*/
export const setVisibilityFilter = (filter) =&amp;gt; ({
  type: types.SET_VISIBILITY_FILTER,
  filter,
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The toolset provides &lt;code&gt;createAction&lt;/code&gt;, which takes only the action type as a
parameter and returns an action creator.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-jsx&quot;&gt;// actions/index.js
import * as types from &amp;quot;../constants/ActionTypes&amp;quot;;

/* ... Other actions ...*/
export const setVisibilityFilter = createAction(types.SET_VISIBILITY_FILTER);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The generated action creator takes an optional argument, which becomes the
action&apos;s payload:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-jsx&quot;&gt;const setVisibilityFilter = createAction(&amp;quot;SET_VISIBILITY_FILTER&amp;quot;);

let action = setVisibilityFilter();
// { type: &apos;SET_VISIBILITY_FILTER&apos; }

action = setVisibilityFilter(&amp;quot;SHOW_COMPLETED&amp;quot;);
// { type: &apos;SET_VISIBILITY_FILTER&apos;, payload: &apos;SHOW_COMPLETED&apos; }

setVisibilityFilter.toString();
// &apos;SET_VISIBILITY_FILTER&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now the filter uses the &lt;code&gt;payload&lt;/code&gt; key instead of &lt;code&gt;filter&lt;/code&gt;. This requires a small
reducer change:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-jsx&quot;&gt;// reducers/visibilityFilter.js
import { createReducer } from &amp;quot;redux-starter-kit&amp;quot;;
import { SET_VISIBILITY_FILTER } from &amp;quot;../constants/ActionTypes&amp;quot;;
import { SHOW_ALL } from &amp;quot;../constants/TodoFilters&amp;quot;;

export default createReducer(SHOW_ALL, {
  [SET_VISIBILITY_FILTER]: (state, action) =&amp;gt; action.payload,
});
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Selectors&lt;/h3&gt;
&lt;p&gt;Selectors are one of the best patterns to adopt when working with Redux. They
let you refactor the state structure without changing every component that
consumes it.&lt;/p&gt;
&lt;p&gt;The visibility filter selector is straightforward:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-jsx&quot;&gt;// selectors/index.js
const getVisibilityFilter = (state) =&amp;gt; state.visibilityFilter;

/* ... Other selectors ...*/
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Using &lt;code&gt;createSelector&lt;/code&gt; adds a bit more code, but the payoff comes soon. Keep
reading.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-jsx&quot;&gt;// selectors/index.js
import { createSelector } from &amp;quot;redux-starter-kit&amp;quot;;

const getVisibilityFilter = createSelector([&amp;quot;visibilityFilter&amp;quot;]);

/* ... Other selectors ...*/
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Slices&lt;/h3&gt;
&lt;p&gt;So far, we&apos;ve replaced simple functions with simpler ones using various
creators. Now comes the real power of the toolset: &lt;code&gt;createSlice&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;createSlice&lt;/code&gt; accepts an initial state, reducer functions, and an optional slice
name. It automatically generates action creators, action types, and selectors.&lt;/p&gt;
&lt;p&gt;Now we can discard all the previous code.&lt;/p&gt;
&lt;p&gt;Creating a slice for the visibility filter is clean and eliminates significant
boilerplate.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-jsx&quot;&gt;// ducks/visibilityFilter.js
import { createSlice } from &amp;quot;redux-starter-kit&amp;quot;;
import { SHOW_ALL } from &amp;quot;../constants/TodoFilters&amp;quot;;

export default createSlice({
  slice: &amp;quot;visibilityFilter&amp;quot;,
  initialState: SHOW_ALL,
  reducers: {
    setVisibilityFilter: (state, action) =&amp;gt; action.payload,
  },
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The result is a single object containing everything needed to work with Redux:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-jsx&quot;&gt;const reducer = combineReducers({
  visibilityFilter: visibilityFilter.reducer,
});

const store = createStore(reducer);

store.dispatch(visibilityFilter.actions.setVisibilityFilter(SHOW_COMPLETED));
// -&amp;gt; { visibilityFilter: &apos;SHOW_COMPLETED&apos; }

const state = store.getState();
console.log(visibilityFilter.selectors.getVisibilityFilter(state));
// -&amp;gt; SHOW_COMPLETED
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;See all changes so far in
&lt;a href=&quot;https://github.com/magarcia/todomvc-redux-starter-kit/commit/ae78e0aacd4827786a63f29db4d6f4e0a2079422&quot;&gt;this commit&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Todos&lt;/h2&gt;
&lt;p&gt;The todos reducer is more complex, so I&apos;ll explain the final result rather than
each step. See the
&lt;a href=&quot;https://github.com/magarcia/todomvc-redux-starter-kit/blob/ba531a2ea7c2c5ee8148e2a1ab491e7e0a31e819/src/ducks/todos.js&quot;&gt;complete code here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;First, define the initial state:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-jsx&quot;&gt;// ducks/todos.js
const initialState = [
  {
    text: &amp;quot;Use Redux&amp;quot;,
    completed: false,
    id: 0,
  },
];
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To improve readability, I extracted each reducer action into its own function:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-jsx&quot;&gt;// ducks/todos.js
const addTodo = (state, action) =&amp;gt; [
  ...state,
  {
    id: state.reduce((maxId, todo) =&amp;gt; Math.max(todo.id, maxId), -1) + 1,
    completed: false,
    text: action.payload.text,
  },
];

const deleteTodo = (state, action) =&amp;gt;
  state.filter((todo) =&amp;gt; todo.id !== action.payload.id);

const editTodo = (state, action) =&amp;gt;
  state.map((todo) =&amp;gt;
    todo.id === action.payload.id
      ? { ...todo, text: action.payload.text }
      : todo,
  );

const completeTodo = (state, action) =&amp;gt;
  state.map((todo) =&amp;gt;
    todo.id === action.payload.id
      ? { ...todo, completed: !todo.completed }
      : todo,
  );
const completeAllTodos = (state) =&amp;gt; {
  const areAllMarked = state.every((todo) =&amp;gt; todo.completed);
  return state.map((todo) =&amp;gt; ({
    ...todo,
    completed: !areAllMarked,
  }));
};

const clearCompleted = (state) =&amp;gt;
  state.filter((todo) =&amp;gt; todo.completed === false);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now combine them in a slice:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-jsx&quot;&gt;// ducks/todos.js
const todos = createSlice({
  slice: &amp;quot;todos&amp;quot;,
  initialState,
  reducers: {
    add: addTodo,
    delete: deleteTodo,
    edit: editTodo,
    complete: completeTodo,
    completeAll: completeAllTodos,
    clearCompleted: clearCompleted,
  },
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;By default, &lt;code&gt;createSlice&lt;/code&gt; selectors simply return state values (e.g.,
&lt;code&gt;todos.selectors.getTodos&lt;/code&gt;). This application needs more complex selectors.&lt;/p&gt;
&lt;p&gt;For example, &lt;code&gt;getVisibleTodos&lt;/code&gt; needs both the visibility filter and todos.
&lt;code&gt;createSelector&lt;/code&gt; takes an array of selectors (or state paths) as its first
parameter and a function implementing the selection logic as its second.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-jsx&quot;&gt;// ducks/todos.js
const { getVisibilityFilter } = visibilityFilter.selectors;

todos.selectors.getVisibleTodos = createSelector(
  [getVisibilityFilter, todos.selectors.getTodos],
  (visibilityFilter, todos) =&amp;gt; {
    switch (visibilityFilter) {
      case SHOW_ALL:
        return todos;
      case SHOW_COMPLETED:
        return todos.filter((t) =&amp;gt; t.completed);
      case SHOW_ACTIVE:
        return todos.filter((t) =&amp;gt; !t.completed);
      default:
        throw new Error(&amp;quot;Unknown filter: &amp;quot; + visibilityFilter);
    }
  },
);

todos.selectors.getCompletedTodoCount = createSelector(
  [todos.selectors.getTodos],
  (todos) =&amp;gt;
    todos.reduce((count, todo) =&amp;gt; (todo.completed ? count + 1 : count), 0),
);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I added the new selectors to the &lt;code&gt;todos.selectors&lt;/code&gt; object, keeping all selectors
in one place.&lt;/p&gt;
&lt;h2&gt;Create Store&lt;/h2&gt;
&lt;p&gt;The library also provides &lt;code&gt;configureStore&lt;/code&gt; and &lt;code&gt;getDefaultMiddleware&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;configureStore&lt;/code&gt; wraps Redux&apos;s &lt;code&gt;createStore&lt;/code&gt;. It offers the same functionality
with a cleaner API--enabling developer tools requires just a boolean.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;getDefaultMiddleware&lt;/code&gt; returns
&lt;code&gt;[immutableStateInvariant, thunk, serializableStateInvariant]&lt;/code&gt; in development
and &lt;code&gt;[thunk]&lt;/code&gt; in production.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;redux-immutable-state-invariant&lt;/code&gt;: Detects mutations in reducers during
dispatch and between dispatches (in selectors or components).&lt;/li&gt;
&lt;li&gt;&lt;code&gt;serializable-state-invariant-middleware&lt;/code&gt;: Checks state and actions for
non-serializable values like functions and Promises.&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-jsx&quot;&gt;// store.js
import { configureStore, getDefaultMiddleware } from &amp;quot;redux-starter-kit&amp;quot;;
import { combineReducers } from &amp;quot;redux&amp;quot;;
import { visibilityFilter, todos } from &amp;quot;./ducks&amp;quot;;

const preloadedState = {
  todos: [
    {
      text: &amp;quot;Use Redux&amp;quot;,
      completed: false,
      id: 0,
    },
  ],
};

const reducer = combineReducers({
  todos: todos.reducer,
  visibilityFilter: visibilityFilter.reducer,
});

const middleware = [...getDefaultMiddleware()];

export const store = configureStore({
  reducer,
  middleware,
  devTools: process.env.NODE_ENV !== &amp;quot;production&amp;quot;,
  preloadedState,
});
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Final thoughts&lt;/h2&gt;
&lt;p&gt;Redux Starter Kit reduces boilerplate, making code cleaner and easier to
understand. It also speeds up development.&lt;/p&gt;
&lt;p&gt;Source code: &lt;a href=&quot;https://github.com/magarcia/todomvc-redux-starter-kit&quot;&gt;github.com/magarcia/todomvc-redux-starter-kit&lt;/a&gt;&lt;/p&gt;
</content:encoded>
        </item>
        <item>
            <title><![CDATA[$watch with Angular 2]]></title>
            <description><![CDATA[How to migrate the $watch feature from AngularJS to Angular2]]></description>
            <link>https://magarcia.io/watch-with-angular2/</link>
            <guid isPermaLink="false">https://magarcia.io/watch-with-angular2/</guid>
            <category><![CDATA[gsoc]]></category>
            <category><![CDATA[open-source]]></category>
            <dc:creator><![CDATA[contact@magarcia.io]]></dc:creator>
            <pubDate>Mon, 04 Jul 2016 00:00:00 GMT</pubDate>
<content:encoded>&lt;p&gt;In my &lt;a href=&quot;https://magarcia.io/2016/07/03/events-in-angular2/&quot;&gt;previous post&lt;/a&gt;, I talked about how
to implement Angular 1 events in Angular 2. But the snippet of code I used as an
example contains another feature that does not exist in Angular 2:
&lt;code&gt;$watch&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Let me start by defining the problem. Suppose we have an Angular 1 directive
or component like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;var module = angular.module(&amp;quot;myApp&amp;quot;);

module.directive(&apos;exampleDirective&apos;, function () {
  return {
    template: &apos;&amp;lt;div&amp;gt;{{internalVar}}&amp;lt;/div&amp;gt;&apos;,
    scope: {
      externalVar: &amp;quot;=&amp;quot;
    },
    controller: function($scope) {
      $scope.$watch(&apos;externalVar&apos;, function(newVal, oldVal) {
        if (newVal !== oldVal) {
          $scope.internalVar = newVal;
        }
      });
    }
  };
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If we want to migrate this code to Angular 2, we hit a problem: the new
Angular has no &lt;code&gt;scope&lt;/code&gt;, so it has no &lt;code&gt;$watch&lt;/code&gt;. How can we
watch a directive attribute? The solution is the &lt;strong&gt;set&lt;/strong&gt; syntax from ES6.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The &lt;strong&gt;set&lt;/strong&gt; syntax binds an object property to a function to be called when
there is an attempt to set that property.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;em&gt;From
&lt;a href=&quot;https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Functions/set&quot;&gt;MDN&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;So we can bind a component&apos;s input to a setter function that does the same
job as the &lt;code&gt;$watch&lt;/code&gt; listener.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;import { Component, Input } from &amp;quot;@angular/core&amp;quot;;

@Component({
  selector: &amp;quot;example-component&amp;quot;,
})
export class ExampleComponent {
  public internalVal = null;

  constructor() {}

  @Input(&amp;quot;externalVal&amp;quot;)
  set updateInternalVal(externalVal) {
    this.internalVal = externalVal;
  }
}
&lt;/code&gt;&lt;/pre&gt;
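&lt;p&gt;For completeness, this is how a parent template would drive the setter (a
hedged sketch; the parent property name &lt;code&gt;someValue&lt;/code&gt; is made up for
illustration). Every time &lt;code&gt;someValue&lt;/code&gt; changes, Angular 2 calls the
setter, much like &lt;code&gt;$watch&lt;/code&gt; fired its listener:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-html&quot;&gt;&amp;lt;!-- parent template (hypothetical property name) --&amp;gt;
&amp;lt;example-component [externalVal]=&amp;quot;someValue&amp;quot;&amp;gt;&amp;lt;/example-component&amp;gt;
&lt;/code&gt;&lt;/pre&gt;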
</content:encoded>
        </item>
        <item>
            <title><![CDATA[Events in Angular2]]></title>
            <description><![CDATA[About how to migrate events from AngularJS to Angular2.]]></description>
            <link>https://magarcia.io/events-in-angular2/</link>
            <guid isPermaLink="false">https://magarcia.io/events-in-angular2/</guid>
            <category><![CDATA[gsoc]]></category>
            <category><![CDATA[open-source]]></category>
            <dc:creator><![CDATA[contact@magarcia.io]]></dc:creator>
            <pubDate>Sun, 03 Jul 2016 00:00:00 GMT</pubDate>
            <content:encoded>&lt;p&gt;During the feed component migration, I encountered code I did not know how to
port to Angular 2:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;if (!feed.isLocalScreen) {
  // Until this timeout is reached, the &amp;quot;you are muted&amp;quot; notification
  // will not be displayed again
  var mutedWarningTimeout = now();

  scope.$on(&amp;quot;muted.byRequest&amp;quot;, function () {
    mutedWarningTimeout = secondsFromNow(3);
    MuteNotifier.muted();
  });

  scope.$on(&amp;quot;muted.byUser&amp;quot;, function () {
    // Reset the warning timeout
    mutedWarningTimeout = now();
  });

  scope.$on(&amp;quot;muted.Join&amp;quot;, function () {
    mutedWarningTimeout = now();
    MuteNotifier.joinedMuted();
  });

  scope.$watch(&amp;quot;vm.feed.isVoiceDetected()&amp;quot;, function (newVal) {
    // Display warning only if muted (check for false, undefined means
    // still connecting) and the timeout has been reached
    if (
      newVal &amp;amp;&amp;amp;
      feed.getAudioEnabled() === false &amp;amp;&amp;amp;
      now() &amp;gt; mutedWarningTimeout
    ) {
      MuteNotifier.speaking();
      mutedWarningTimeout = secondsFromNow(60);
    }
  });
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When the condition is true, the directive listens for &lt;code&gt;muted.byRequest&lt;/code&gt;,
&lt;code&gt;muted.byUser&lt;/code&gt;, and &lt;code&gt;muted.Join&lt;/code&gt; events. The event-handling code is
straightforward (ignoring &lt;code&gt;$watch&lt;/code&gt; for now).&lt;/p&gt;
&lt;p&gt;But wait a minute: I have read the Angular 2 documentation about a hundred
times, and I don&apos;t remember anything about Angular 1.X-style &amp;quot;events&amp;quot;. That&apos;s
because they don&apos;t exist. Angular 2 has no equivalent of Angular 1 events, so I
had to find a solution. After some searching, I found
&lt;a href=&quot;http://blog.lacolaco.net/post/event-broadcasting-in-angular-2/&quot;&gt;this entry&lt;/a&gt; on
laco&apos;s blog.&lt;/p&gt;
&lt;h2&gt;Broadcaster&lt;/h2&gt;
&lt;p&gt;Basically, the idea is to build a service that implements the &lt;code&gt;$broadcast&lt;/code&gt; and
&lt;code&gt;$on&lt;/code&gt; methods we had on &lt;code&gt;$rootScope&lt;/code&gt;. To do this we use Observables, a core
concept in Angular 2; in this particular case, a
&lt;a href=&quot;https://github.com/Reactive-Extensions/RxJS/blob/master/doc/gettingstarted/subjects.md&quot;&gt;Subject&lt;/a&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;import { Subject } from &amp;quot;rxjs/Subject&amp;quot;;
import { Observable } from &amp;quot;rxjs/Observable&amp;quot;;
import &amp;quot;rxjs/add/operator/filter&amp;quot;;
import &amp;quot;rxjs/add/operator/map&amp;quot;;

interface BroadcastEvent {
  key: any;
  data?: any;
}

export class Broadcaster {
  private _eventBus: Subject&amp;lt;BroadcastEvent&amp;gt;;

  constructor() {
    this._eventBus = new Subject&amp;lt;BroadcastEvent&amp;gt;();
  }

  broadcast(key: any, data?: any) {
    this._eventBus.next({ key, data });
  }

  on&amp;lt;T&amp;gt;(key: any): Observable&amp;lt;T&amp;gt; {
    return this._eventBus
      .asObservable()
      .filter((event) =&amp;gt; event.key === key)
      .map((event) =&amp;gt; &amp;lt;T&amp;gt;event.data);
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now we can use events much as we did in the original example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;// child.ts
@Component({
    selector: &apos;child&apos;
})
export class ChildComponent {
  constructor(private broadcaster: Broadcaster) {
  }

  registerStringBroadcast() {
    this.broadcaster.on&amp;lt;string&amp;gt;(&apos;MyEvent&apos;)
      .subscribe(message =&amp;gt; {
        ...
      });
  }

  emitStringBroadcast() {
    this.broadcaster.broadcast(&apos;MyEvent&apos;, &apos;some message&apos;);
  }
}
&lt;/code&gt;&lt;/pre&gt;
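&lt;p&gt;One detail worth noting: for this to work, &lt;code&gt;Broadcaster&lt;/code&gt; must be
registered as a provider at a common ancestor (for example, at bootstrap), so
every component injects the &lt;em&gt;same&lt;/em&gt; instance and therefore shares the same
&lt;code&gt;Subject&lt;/code&gt;. A minimal sketch, assuming a root &lt;code&gt;AppComponent&lt;/code&gt;
and illustrative file paths:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;// main.ts (illustrative sketch)
import { bootstrap } from &amp;quot;@angular/platform-browser-dynamic&amp;quot;;
import { AppComponent } from &amp;quot;./app.component&amp;quot;;
import { Broadcaster } from &amp;quot;./broadcaster&amp;quot;;

// A single Broadcaster instance for the whole application
bootstrap(AppComponent, [Broadcaster]);
&lt;/code&gt;&lt;/pre&gt;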
&lt;h2&gt;How did I solve the problem?&lt;/h2&gt;
&lt;p&gt;I didn&apos;t. These events only show informational pop-ups when the user is
muted, so they are not a critical feature. For now, these events are fired and
listened to in different components, some of which are still implemented in
Angular 1.4.&lt;/p&gt;
&lt;p&gt;This is a solution I wanted to share with you, but I&apos;m not sure it is the one
I will finally use, because these events probably won&apos;t be necessary once I
reimplement &lt;code&gt;MuteNotifier&lt;/code&gt;.&lt;/p&gt;
</content:encoded>
        </item>
        <item>
            <title><![CDATA[Midterm]]></title>
            <description><![CDATA[A retrospective about the work done during the Google Summer of code.]]></description>
            <link>https://magarcia.io/midterm/</link>
            <guid isPermaLink="false">https://magarcia.io/midterm/</guid>
            <category><![CDATA[gsoc]]></category>
            <category><![CDATA[open-source]]></category>
            <dc:creator><![CDATA[contact@magarcia.io]]></dc:creator>
            <pubDate>Wed, 29 Jun 2016 00:00:00 GMT</pubDate>
            <content:encoded>&lt;p&gt;GSoC midterm just passed. Time to review the work since the project started.
After these weeks working on Jangouts and using it regularly for follow-up
meetings, I love it as if it were my own. Working on this project is rewarding:
small enough to grasp, yet backed by a growing community. Jangouts remains in an
early stage but has great potential.&lt;/p&gt;
&lt;h2&gt;Work done&lt;/h2&gt;
&lt;p&gt;I missed my initial timeline, but I am close and the hardest part is over.
Jangouts now uses TypeScript with a new build/development process and runs as a
hybrid Angular 1.x/Angular 2 application.&lt;/p&gt;
&lt;p&gt;Jangouts is composed of different components:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;browser-info&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;chat&lt;/code&gt; - &lt;strong&gt;Migrated&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;feed&lt;/code&gt; - &lt;strong&gt;Almost migrated&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;footer&lt;/code&gt; - &lt;strong&gt;Migrated&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;notifier&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;room&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;router&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;screen-share&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;user&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;videochat&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;signin&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Each component migration involves conversion from Angular 1 to 2 and tests
targeting near 100% coverage. The most complex components to migrate are &lt;code&gt;feed&lt;/code&gt;
and &lt;code&gt;room&lt;/code&gt; because they handle video rendering and Janus backend communication.
The router will likely require a complete rewrite for the new Angular 2 router.&lt;/p&gt;
&lt;h2&gt;Mentors&lt;/h2&gt;
&lt;p&gt;I have only good things to say about @ancorgs and @imobach. We hold daily
meetings (when possible), and they give me feedback while allowing me freedom to
make my own decisions (when I provide reasons).&lt;/p&gt;
&lt;h2&gt;Next steps&lt;/h2&gt;
&lt;p&gt;In the coming weeks, I will continue migrating components until Angular 1 can be
removed. When migration finishes, Jangouts will be an Angular 2 project with a
comprehensive test suite. My GSoC work will be complete, but I want to do more.
Many things can improve:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Restructure the project by moving logic from components to services.&lt;/li&gt;
&lt;li&gt;Leverage Observables better (probably using
&lt;a href=&quot;https://github.com/ngrx/store&quot;&gt;@ngrx/store&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Improve UI and mobile UX using
&lt;a href=&quot;https://developers.google.com/web/progressive-web-apps/&quot;&gt;progressive web app&lt;/a&gt;
concepts.&lt;/li&gt;
&lt;li&gt;Improve communication and community (project webpage, better contribution
docs, etc.)&lt;/li&gt;
&lt;/ol&gt;
</content:encoded>
        </item>
        <item>
            <title><![CDATA[AngularBeers with Miško Hevery]]></title>
            <description><![CDATA[Upcoming features that will make Angular 2 a powerful option in a near future.]]></description>
            <link>https://magarcia.io/angularbeers-with-misko-hevery/</link>
            <guid isPermaLink="false">https://magarcia.io/angularbeers-with-misko-hevery/</guid>
            <category><![CDATA[angular]]></category>
            <dc:creator><![CDATA[contact@magarcia.io]]></dc:creator>
            <pubDate>Sun, 26 Jun 2016 00:00:00 GMT</pubDate>
            <content:encoded>&lt;p&gt;Last Tuesday I attended a talk by &lt;a href=&quot;http://misko.hevery.com/about/&quot;&gt;Miško Hevery&lt;/a&gt;
about Angular 2, organized by
&lt;a href=&quot;http://www.meetup.com/AngularJS-Beers/&quot;&gt;AngularBeers&lt;/a&gt;. The key takeaway:
Angular is evolving from a frontend framework into a full platform.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://magarcia.io/images/angularbeers-with-misko-hevery.jpg&quot; alt=&quot;miskohevery&quot;&gt; &lt;em&gt;Sara (a good
coworker and better friend), Miško and me&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Two upcoming features will make Angular 2 particularly powerful.&lt;/p&gt;
&lt;h2&gt;Offline compile&lt;/h2&gt;
&lt;p&gt;Templates have been error-prone since Angular 1. Even with TypeScript or lint
tools, template errors remain undetected until runtime. Angular 1.X compiles
templates each time they render.&lt;/p&gt;
&lt;p&gt;Angular 2 (without offline compile) compiles templates only once. With offline
compiling, templates compile to JavaScript at build time, eliminating browser
compilation. The benefits: static type-checking of templates with TypeScript, no
runtime compilation, and smaller library size.&lt;/p&gt;
&lt;h2&gt;Angular Universal&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;Universal (isomorphic) JavaScript support for Angular 2.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Angular Universal enables server-side Angular 2, providing several advantages:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Better Perceived Performance:&lt;/strong&gt; Users instantly see a server-rendered view,
improving perceived performance and user experience.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Optimized for Search Engines:&lt;/strong&gt; Server-side pre-rendering ensures all
search engines can access your content.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Site Preview:&lt;/strong&gt; Facebook, Twitter, and other social media apps correctly
display preview images. (I have struggled with this problem before---it is
&lt;em&gt;frustrating&lt;/em&gt;.)&lt;/li&gt;
&lt;/ol&gt;
</content:encoded>
        </item>
        <item>
            <title><![CDATA[Moving big parts to Angular 2]]></title>
            <description><![CDATA[Migrating the most important components and directives to Angular2]]></description>
            <link>https://magarcia.io/moving-big-parts-to-angular-2/</link>
            <guid isPermaLink="false">https://magarcia.io/moving-big-parts-to-angular-2/</guid>
            <category><![CDATA[gsoc]]></category>
            <category><![CDATA[open-source]]></category>
            <dc:creator><![CDATA[contact@magarcia.io]]></dc:creator>
            <pubDate>Sun, 12 Jun 2016 00:00:00 GMT</pubDate>
            <content:encoded>&lt;p&gt;In a previous post I explained how I converted Jangouts to a hybrid Angular 1+2
application. This approach, instead of a full migration, has two objectives.
First, testing functionality becomes easier since Jangouts remains runnable
throughout. Second, if I cannot finish the migration, others can continue the
work. I hope this fallback proves unnecessary.&lt;/p&gt;
&lt;p&gt;With the hybrid approach in place, this week I migrated several components to
Angular 2. I started with the Chat component---more complex than the Footer, but
manageable for these early stages.&lt;/p&gt;
&lt;h2&gt;Migrating subcomponents&lt;/h2&gt;
&lt;p&gt;The Jangouts Chat component has three subcomponents besides the main component:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;chat-message&lt;/code&gt;&lt;/strong&gt;: Renders user messages.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;log-entry&lt;/code&gt;&lt;/strong&gt;: Renders system notifications (like &amp;quot;&lt;em&gt;User X has joined&lt;/em&gt;&amp;quot;).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;chat-form&lt;/code&gt;&lt;/strong&gt;: Handles message input.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These subcomponents are simple: each has a minimal component class and a
template. The key change: styles moved from the main &lt;code&gt;scss&lt;/code&gt; file to independent
files for each subcomponent. This leverages
&lt;a href=&quot;https://angular.io/docs/ts/latest/guide/component-styles.html#!#view-encapsulation&quot;&gt;Angular 2 View Encapsulation&lt;/a&gt;,
ensuring styles apply only to their component.&lt;/p&gt;
&lt;p&gt;During the &lt;code&gt;chat-message&lt;/code&gt; migration, I encountered a problem with
&lt;a href=&quot;https://github.com/ritz078/ng-embed&quot;&gt;ngEmbed&lt;/a&gt;, the library providing a
directive to render user messages. This directive enables emojis and embedded
links, images, and videos. The library lacks Angular 2 support, so I tried the
&lt;a href=&quot;https://angular.io/docs/ts/latest/guide/upgrade.html#!#how-the-upgrade-adapter-works&quot;&gt;Angular 2 Upgrade Adapter&lt;/a&gt;,
but encountered a strange error.&lt;/p&gt;
&lt;p&gt;Investigation revealed that ngEmbed uses a function as its &lt;code&gt;templateUrl&lt;/code&gt;
attribute (allowed in Angular 1). However, my current Angular 2 version&apos;s
upgrade adapter lacks support for function-based &lt;code&gt;templateUrl&lt;/code&gt;. The Angular 2
master branch includes this support, but no released version incorporates it
yet. After discussing with my mentors, we decided to disable this functionality
and continue the migration.&lt;/p&gt;
&lt;p&gt;I hope to re-enable it in the future.&lt;/p&gt;
&lt;h2&gt;Differentiate between component and directive&lt;/h2&gt;
&lt;p&gt;Migrating the main component proved more complex. It displays all messages (user
and system) in a view that auto-scrolls when new messages arrive. In old
Jangouts, one directive both rendered the message list and controlled
auto-scrolling. Angular 2 requires a different approach: components always have
templates and never interact with the DOM directly; directives never have
templates but can interact with the DOM.&lt;/p&gt;
&lt;p&gt;This meant splitting the main chat component into two parts:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;component&lt;/strong&gt; to render the message list.&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;directive&lt;/strong&gt; to handle auto-scrolling.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;After migration, the component renders the message list and contains the
directive that handles auto-scrolling.&lt;/p&gt;
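&lt;p&gt;To give an idea of the split, here is a minimal sketch of what the
auto-scrolling directive could look like (the selector and method names are my
own for illustration, not necessarily those used in Jangouts):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;// message-autoscroll.directive.ts (illustrative sketch)
import { Directive, ElementRef } from &amp;quot;@angular/core&amp;quot;;

@Directive({
  selector: &amp;quot;[messageAutoscroll]&amp;quot;,
})
export class MessageAutoscrollDirective {
  constructor(private el: ElementRef) {}

  // The host component calls this whenever a new message arrives;
  // only the directive touches the DOM.
  scrollToBottom(): void {
    const native = this.el.nativeElement;
    native.scrollTop = native.scrollHeight;
  }
}
&lt;/code&gt;&lt;/pre&gt;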
&lt;h2&gt;Putting all together&lt;/h2&gt;
&lt;p&gt;During subcomponent migration, I downgraded each one to Angular 1 compatibility
using Angular 2&apos;s adapter and tested manually with the old main component. When
I migrated the main component, its code became pure Angular 2 (without
downgraded subcomponents). Only the main chat component needed downgrading for
Angular 1 compatibility.&lt;/p&gt;
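&lt;p&gt;Downgrading itself is essentially a one-liner per component with the Upgrade
Adapter. A hedged sketch (the module and component names here are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;// adapter.ts (illustrative sketch)
import { UpgradeAdapter } from &amp;quot;@angular/upgrade&amp;quot;;
import { ChatComponent } from &amp;quot;./chat&amp;quot;;

export const adapter = new UpgradeAdapter();

// Expose the Angular 2 chat component as an Angular 1 directive
angular
  .module(&amp;quot;jangouts&amp;quot;)
  .directive(&amp;quot;chat&amp;quot;, adapter.downgradeNg2Component(ChatComponent));
&lt;/code&gt;&lt;/pre&gt;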
&lt;h2&gt;Applying the correct application structure&lt;/h2&gt;
&lt;p&gt;This week&apos;s changes extended beyond code. I also restructured the application
following the
&lt;a href=&quot;https://angular.io/styleguide#!#application-structure_&quot;&gt;style guide&lt;/a&gt;
recommendations. Before migration, the structure was:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;src
└── app
    ├── adapter.ts
    ├── variables.scss
    ├── index.scss
    ├── vendor.scss
    ├── index.ts
    ├── components
    │   ├── chat
    │   │   ├── chat-form.directive.html
    │   │   ├── chat-form.directive.js
    │   │   ├── chat.directive.html
    │   │   ├── chat.directive.js
    │   │   ├── chat-message.directive.html
    │   │   ├── chat-message.directive.js
    │   │   ├── log-entry.directive.html
    │   │   └── log-entry.directive.js
    │   ├── footer
    │   │   ├── footer.directive.html
    │   │   └── footer.directive.js
    │   └── [...]
    └── [...]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After the changes:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;src
└── app
    ├── adapter.ts
    ├── variables.scss
    ├── index.scss
    ├── vendor.scss
    ├── index.ts
    ├── chat
    │   ├── index.ts
    │   ├── chat.component.html
    │   ├── chat.component.scss
    │   ├── chat.component.spec.ts
    │   ├── chat.component.ts
    │   ├── chat-form
    │   │   ├── chat-form.component.html
    │   │   ├── chat-form.component.spec.ts
    │   │   ├── chat-form.component.ts
    │   │   └── index.ts
    │   ├── chat-message
    │   │   ├── chat-message.component.html
    │   │   ├── chat-message.component.scss
    │   │   ├── chat-message.component.spec.ts
    │   │   ├── chat-message.component.ts
    │   │   └── index.ts
    │   ├── log-entry
    │   │   ├── index.ts
    │   │   ├── log-entry.component.html
    │   │   ├── log-entry.component.spec.ts
    │   │   └── log-entry.component.ts
    │   └── message-autoscroll.directive.ts
    ├── footer
    │   ├── footer.component.html
    │   ├── footer.component.scss
    │   ├── footer.component.spec.ts
    │   ├── footer.component.ts
    │   └── index.ts
    ├── components
    │   └── [...] // Contains the not-yet-migrated code
    └── [...]
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Currently working&lt;/h2&gt;
&lt;p&gt;I am now migrating the Feed component, one of the most complex in the
application due to its many services handling video/audio streams.&lt;/p&gt;
&lt;p&gt;I have moved all services and factories to Angular 2, but have not yet enabled
Angular 1 compatibility. The reason: I want a comprehensive test suite covering
these services before continuing the migration and integrating with the rest of
the application.&lt;/p&gt;
</content:encoded>
        </item>
        <item>
            <title><![CDATA[Components migration started]]></title>
            <description><![CDATA[Start the migration of components from AngularJS to Angular2 in Jangouts.]]></description>
            <link>https://magarcia.io/component-migration-started/</link>
            <guid isPermaLink="false">https://magarcia.io/component-migration-started/</guid>
            <category><![CDATA[gsoc]]></category>
            <category><![CDATA[open-source]]></category>
            <dc:creator><![CDATA[contact@magarcia.io]]></dc:creator>
            <pubDate>Sun, 05 Jun 2016 00:00:00 GMT</pubDate>
            <content:encoded>&lt;p&gt;Another week finished. Exams limited my progress this week, but the project
still advanced significantly. Four key accomplishments:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Fixed the build process for production.&lt;/li&gt;
&lt;li&gt;Added test runner.&lt;/li&gt;
&lt;li&gt;Migrated the first component to Angular 2.&lt;/li&gt;
&lt;li&gt;Added the first test for a migrated component.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Fixing the build process&lt;/h2&gt;
&lt;p&gt;Last week the development environment was ready, and presumably the distribution
environment too. But throughout the week, I noticed the distributed build caused
grid element issues inside the video chat room. After testing various Webpack
configurations, I traced the problems to the uglify process.&lt;/p&gt;
&lt;p&gt;The issue involved the video room&apos;s grid layout system, implemented with
&lt;a href=&quot;http://manifestwebdesign.github.io/angular-gridster/&quot;&gt;angular-gridster&lt;/a&gt;. The
problem could stem from angular-gridster&apos;s styles or the JavaScript module.
First, I tried importing the library&apos;s CSS directly without Webpack. That
failed. The solution: use the source version instead of the minified version in
&lt;code&gt;vendors.ts&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Strange, since source and minified versions should behave identically.
Regardless, the source version fixes the issue, and Webpack minifies all code
anyway, including &lt;code&gt;vendors.ts&lt;/code&gt;.&lt;/p&gt;
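&lt;p&gt;As a sketch of the fix (the exact file paths are assumptions, not the actual
Jangouts code), &lt;code&gt;vendors.ts&lt;/code&gt; points at the source build instead of the
minified one:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;// vendors.ts (hypothetical excerpt)
// Import the source build of angular-gridster rather than the .min.js;
// Webpack&apos;s uglify step minifies it together with the rest of the bundle.
import &apos;angular&apos;;
// import &apos;angular-gridster/dist/angular-gridster.min.js&apos;; // broke the grid
import &apos;angular-gridster/src/angular-gridster.js&apos;;
&lt;/code&gt;&lt;/pre&gt;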
&lt;h2&gt;Adding test runner&lt;/h2&gt;
&lt;p&gt;Easier than expected. I used
&lt;a href=&quot;https://github.com/AngularClass/angular2-webpack-starter&quot;&gt;angular2-webpack-starter&lt;/a&gt;
from &lt;a href=&quot;https://angularclass.com/&quot;&gt;@AngularClass&lt;/a&gt; as inspiration. After adapting
their &lt;code&gt;webpack.test.js&lt;/code&gt; and &lt;code&gt;karma.conf.js&lt;/code&gt; files with minor changes, everything
worked. Open source at its best: Don&apos;t Repeat Yourself.&lt;/p&gt;
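&lt;p&gt;A simplified sketch of what such a &lt;code&gt;karma.conf.js&lt;/code&gt; looks like (the details are
assumed; the real files come from angular2-webpack-starter):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;// karma.conf.js (simplified sketch, adapted from angular2-webpack-starter)
module.exports = function (config) {
  config.set({
    frameworks: [&apos;jasmine&apos;],
    // A single bundle entry that requires all *.spec.ts files
    files: [{ pattern: &apos;spec-bundle.js&apos;, watched: false }],
    preprocessors: { &apos;spec-bundle.js&apos;: [&apos;webpack&apos;] },
    webpack: require(&apos;./webpack.test.js&apos;),
    reporters: [&apos;progress&apos;],
    browsers: [&apos;PhantomJS&apos;],
    singleRun: true
  });
};
&lt;/code&gt;&lt;/pre&gt;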
&lt;h2&gt;Migrating the first component&lt;/h2&gt;
&lt;p&gt;I had awaited this moment since submitting my GSoC proposal. I started with the
simplest component in Jangouts: the footer. It displays only a
&lt;a href=&quot;https://www.suse.com/&quot;&gt;SUSE&lt;/a&gt; link and the Jangouts version.&lt;/p&gt;
&lt;p&gt;In the old Jangouts (pre-migration), the footer consisted of a simple Angular 1
directive with a template and a &lt;a href=&quot;http://jade-lang.com/&quot;&gt;&lt;code&gt;jade&lt;/code&gt;&lt;/a&gt; template. Gulp
rendered it, injecting the current version.&lt;/p&gt;
&lt;p&gt;The new footer remains simple: a
&lt;a href=&quot;https://github.com/magarcia/jangouts/blob/5db2d9de547d6d56aaed90c633b5d98ce64f6219/src/app/components/footer/jh-footer.directive.ts&quot;&gt;TypeScript file&lt;/a&gt;
(component definition with an empty class) and an
&lt;a href=&quot;https://github.com/magarcia/jangouts/blob/5db2d9de547d6d56aaed90c633b5d98ce64f6219/src/app/components/footer/jh-footer.html&quot;&gt;Angular 2 template&lt;/a&gt;.
The key difference: nothing modifies the template to inject the version.
&lt;em&gt;Instead&lt;/em&gt;, Webpack&apos;s
&lt;a href=&quot;https://webpack.js.org/plugins/define-plugin/&quot;&gt;DefinePlugin&lt;/a&gt; adds the version
as a global constant at build time, reading from &lt;code&gt;package.json&lt;/code&gt;.&lt;/p&gt;
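&lt;p&gt;The DefinePlugin part of the Webpack config looks roughly like this (the
&lt;code&gt;VERSION&lt;/code&gt; constant name is an assumption for illustration):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;// webpack.config.js (excerpt)
const webpack = require(&apos;webpack&apos;);
const pkg = require(&apos;./package.json&apos;);

module.exports = {
  // ...
  plugins: [
    // Replaces every occurrence of VERSION in the bundle with the
    // version string read from package.json at build time.
    new webpack.DefinePlugin({
      VERSION: JSON.stringify(pkg.version)
    })
  ]
};
&lt;/code&gt;&lt;/pre&gt;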
&lt;p&gt;Two benefits: the footer is now a pure Angular component, and it&apos;s easier to
test.&lt;/p&gt;
&lt;h2&gt;Adding tests&lt;/h2&gt;
&lt;p&gt;Adding tests throughout the platform is essential during the Angular 2
migration. The new footer component has its own tests. They simply verify that
the &lt;code&gt;version&lt;/code&gt; variable is defined correctly. But these tests serve a second
purpose: they confirm the test runner and coverage reporter work correctly.&lt;/p&gt;
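&lt;p&gt;Such a test is tiny; an illustrative sketch (names assumed, not the actual
Jangouts spec):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;// jh-footer.spec.ts (illustrative sketch)
declare const VERSION: string; // injected by Webpack&apos;s DefinePlugin

describe(&apos;footer component&apos;, () =&amp;gt; {
  it(&apos;exposes the build version&apos;, () =&amp;gt; {
    expect(VERSION).toBeDefined();
    expect(typeof VERSION).toBe(&apos;string&apos;);
  });
});
&lt;/code&gt;&lt;/pre&gt;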
</content:encoded>
        </item>
        <item>
            <title><![CDATA[First coding week]]></title>
            <description><![CDATA[The first week working on the migration of Jangouts to Angular 2.]]></description>
            <link>https://magarcia.io/first-coding-week/</link>
            <guid isPermaLink="false">https://magarcia.io/first-coding-week/</guid>
            <category><![CDATA[gsoc]]></category>
            <category><![CDATA[open-source]]></category>
            <dc:creator><![CDATA[contact@magarcia.io]]></dc:creator>
            <pubDate>Sun, 29 May 2016 00:00:00 GMT</pubDate>
            <content:encoded>&lt;p&gt;The first week of GSoC 2016 coding period has ended. I started upgrading
Jangouts from Angular 1.x to Angular 2. I completed all tasks within the
deadline and hope to maintain this pace next week.&lt;/p&gt;
&lt;p&gt;I&apos;m following the
&lt;a href=&quot;https://angular.io/docs/ts/latest/guide/upgrade.html&quot;&gt;upgrade guide&lt;/a&gt; from
official Angular docs, which has two main blocks:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Preparation&lt;/li&gt;
&lt;li&gt;Upgrading with The Upgrade Adapter&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I just finished the preparation block. Fortunately, the Jangouts code is clear
and already follows two key preparation requirements: the Angular style guide
and component directives. This left me only two tasks: switch from &lt;code&gt;&amp;lt;script&amp;gt;&lt;/code&gt;
tags to a module loader, and migrate from JavaScript to TypeScript. I reversed
the order, migrating to TypeScript first and then switching to a module loader.
This sequence felt more natural for this project.&lt;/p&gt;
&lt;h2&gt;Migrating to TypeScript&lt;/h2&gt;
&lt;p&gt;Jangouts has a working gulp build system, so I did not need to worry about
script loading. I focused first on migrating files to TypeScript, then leveraged
the &lt;code&gt;import&lt;/code&gt; syntax of TypeScript/ES6.&lt;/p&gt;
&lt;p&gt;Migrating code from JavaScript to TypeScript is straightforward: change the
extension from &lt;code&gt;.js&lt;/code&gt; to &lt;code&gt;.ts&lt;/code&gt;. The existing gulp system does not work with these
changes, so run &lt;code&gt;tsc --watch src/**/*.ts&lt;/code&gt; alongside gulp. This command shows
many errors, but if the JavaScript code is correct, these errors relate only to
TypeScript&apos;s type checking.&lt;/p&gt;
&lt;p&gt;During this migration, I also made the code more modular. Jangouts had all
components registered in a single Angular module &lt;code&gt;janusHangouts&lt;/code&gt;. From previous
projects, I learned this causes trouble with unit testing. I now define a
separate module for each component (&lt;code&gt;janusHangouts.componentName&lt;/code&gt;) and make it a
dependency of the main module. This has two advantages: easier testing, and
potentially loading components on demand with a module loader.&lt;/p&gt;
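&lt;p&gt;A sketch of the pattern (directive and file names are illustrative; only the
module-naming convention comes from the text above):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;// footer/footer.module.ts (illustrative)
// Each component registers itself in its own module...
angular.module(&apos;janusHangouts.footer&apos;, [])
  .directive(&apos;jhFooter&apos;, jhFooterDirective);

// app.module.ts: ...and the main module only lists its component modules.
angular.module(&apos;janusHangouts&apos;, [
  &apos;janusHangouts.footer&apos;
  // one entry per component module
]);
&lt;/code&gt;&lt;/pre&gt;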
&lt;p&gt;As mentioned earlier, compiling JavaScript code with &lt;code&gt;tsc&lt;/code&gt; shows many errors.
One common error is:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;error TS7006: Parameter &apos;$state&apos; implicitly has an &apos;any&apos; type.&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The TypeScript compiler requires a type for all variables. To allow implicit
&lt;code&gt;any&lt;/code&gt; types for untyped variables, disable &lt;code&gt;noImplicitAny&lt;/code&gt; in &lt;code&gt;tsconfig.json&lt;/code&gt;.&lt;/p&gt;
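&lt;p&gt;The relevant &lt;code&gt;tsconfig.json&lt;/code&gt; fragment:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &amp;quot;compilerOptions&amp;quot;: {
    &amp;quot;noImplicitAny&amp;quot;: false
  }
}
&lt;/code&gt;&lt;/pre&gt;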
&lt;p&gt;Another error appears when working with HTML elements:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;error TS2339: Property &apos;muted&apos; does not exist on type &apos;HTMLElement&apos;.&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This error comes from code like the following:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;var video = $(&amp;quot;video&amp;quot;, element)[0];
video.muted = true;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;TypeScript is type safe: &lt;code&gt;$(&apos;video&apos;, element)[0]&lt;/code&gt; returns &lt;code&gt;HTMLElement&lt;/code&gt;, which
lacks the &lt;code&gt;muted&lt;/code&gt; property. The subtype &lt;code&gt;HTMLVideoElement&lt;/code&gt; contains &lt;code&gt;muted&lt;/code&gt;.
Cast the result to &lt;code&gt;HTMLVideoElement&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;var video = &amp;lt;HTMLVideoElement&amp;gt;$(&apos;video&apos;, element)[0];
video.muted = true;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Finally, another common error is:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;error TS2339: Property &apos;id&apos; does not exist on type &apos;{}&apos;.&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;TypeScript&apos;s type validation causes this error in code like:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;var room = {};

// Some code here...

function isRoom(room) {
  return room.id == roomId;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Define an interface for the room object to fix this and reduce errors:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;interface Room {
  id?: number; // ? makes the attribute optional
}

// Some code here ...

var room: Room = {};

// Some code here...

function isRoom(room: Room) {
  return room.id == roomId;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Using a Module Loader&lt;/h2&gt;
&lt;p&gt;Why use a module loader? The Angular site explains:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Using a module loader such as
&lt;a href=&quot;https://github.com/systemjs/systemjs&quot;&gt;SystemJS&lt;/a&gt;,
&lt;a href=&quot;http://webpack.github.io/&quot;&gt;Webpack&lt;/a&gt;, or &lt;a href=&quot;http://browserify.org/&quot;&gt;Browserify&lt;/a&gt;
allows us to use the built-in module systems of the TypeScript or ES2015
languages in our apps. We can use the import and export features that
explicitly specify what code can and will be shared between different parts of
the application. [...]&lt;/p&gt;
&lt;p&gt;When we then take our applications into production, module loaders also make
it easier to package them all up into production bundles with batteries
included.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I discarded Browserify due to past bad experiences and tried only SystemJS and
Webpack.&lt;/p&gt;
&lt;h3&gt;SystemJS&lt;/h3&gt;
&lt;p&gt;SystemJS looks clean and simple. Define an entry point (typically the main
application file) and the &lt;code&gt;import&lt;/code&gt; syntax handles the rest. With correct
&lt;code&gt;import&lt;/code&gt; statements, everything works.&lt;/p&gt;
&lt;p&gt;However, this solution requires keeping gulp since SystemJS only handles
imports. This means adding the TypeScript compiler to gulp and disabling auto
script injection in HTML.&lt;/p&gt;
&lt;p&gt;Before rewriting the gulp configuration, I wanted to try Webpack first.&lt;/p&gt;
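&lt;p&gt;For reference, the SystemJS setup I tried looked roughly like this (paths and
names are assumptions; TypeScript would still be pre-compiled by gulp):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;// systemjs.config.js (hypothetical sketch)
System.config({
  packages: {
    app: { main: &apos;main.js&apos;, defaultExtension: &apos;js&apos; }
  }
});
// index.html then bootstraps a single entry point:
// System.import(&apos;app&apos;).catch(console.error.bind(console));
&lt;/code&gt;&lt;/pre&gt;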
&lt;h3&gt;Webpack&lt;/h3&gt;
&lt;p&gt;Webpack configuration is more complex than SystemJS, but replaces gulp entirely.
Like SystemJS, we define an entry point and specify where &lt;code&gt;index.html&lt;/code&gt; is
located for JavaScript file inclusion.&lt;/p&gt;
&lt;p&gt;I had initial troubles, but after studying examples, I got a functional version.
Exploring Webpack further, I found what made me choose it: we can &lt;code&gt;import&lt;/code&gt; or
&lt;code&gt;require&lt;/code&gt; non-JavaScript files. We can require an Angular directive template,
and the build process includes it as a string variable inside the component.
Styles work the same way. This improves performance by bundling all files a
component needs into its JavaScript file, without complicating development.&lt;/p&gt;
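&lt;p&gt;A sketch of what that looks like in a component (loader configuration and file
names assumed):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;// Illustrative component: Webpack inlines the template and styles as
// strings at build time (requires the corresponding html/style loaders).
import { Component } from &apos;@angular/core&apos;;

@Component({
  selector: &apos;jh-footer&apos;,
  template: require(&apos;./jh-footer.html&apos;),
  styles: [require(&apos;./jh-footer.scss&apos;)]
})
export class FooterComponent {}
&lt;/code&gt;&lt;/pre&gt;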
&lt;h2&gt;One more thing&lt;/h2&gt;
&lt;p&gt;This summer looks exciting with everything I will learn through GSoC. Follow my
progress on this blog or through my GitHub contributions. I also published a
&lt;a href=&quot;https://trello.com/b/vtQJBxbf/jangouts&quot;&gt;Trello board&lt;/a&gt; with the project planning
and tasks (still being updated).&lt;/p&gt;
</content:encoded>
        </item>
        <item>
            <title><![CDATA[Ending Community Bonding Period]]></title>
            <description><![CDATA[The coding period for Google Summer of Code is about to start.]]></description>
            <link>https://magarcia.io/ending-community-bonding-period/</link>
            <guid isPermaLink="false">https://magarcia.io/ending-community-bonding-period/</guid>
            <category><![CDATA[gsoc]]></category>
            <category><![CDATA[open-source]]></category>
            <dc:creator><![CDATA[contact@magarcia.io]]></dc:creator>
            <pubDate>Sun, 22 May 2016 00:00:00 GMT</pubDate>
            <content:encoded>&lt;p&gt;A few weeks have passed since my last post. I have been busy.&lt;/p&gt;
&lt;p&gt;Tomorrow the GSoC coding period begins. As mentioned previously, I have been
working on Jangouts: making contributions, fixing bugs, and more. Last week I
started writing tests for some components. A complete test suite would be wasted
effort since the code will change significantly during the Angular 2 migration.
These tests help me understand how Jangouts is structured and how it works.&lt;/p&gt;
&lt;p&gt;During the community bonding period, I stayed in contact with my mentors
(&lt;a href=&quot;https://github.com/imobach&quot;&gt;@imobach&lt;/a&gt; and
&lt;a href=&quot;https://github.com/ancorgs&quot;&gt;@ancorgs&lt;/a&gt;). This past week we had a Jangouts call
(we use Jangouts for meetings) to plan the coming weeks. We will use a Trello
board to organize tasks and hold daily meetings to track progress. I also
committed to writing a weekly blog post summarizing my work.&lt;/p&gt;
&lt;p&gt;The Angular 2 migration starts in the coming days, but first I need a plan.&lt;/p&gt;
</content:encoded>
        </item>
        <item>
            <title><![CDATA[First Weeks at GSoC 2016]]></title>
            <description><![CDATA[Joining the Google Summer of Code program to work on migrating an application to Angular 2.]]></description>
            <link>https://magarcia.io/starting-gsoc/</link>
            <guid isPermaLink="false">https://magarcia.io/starting-gsoc/</guid>
            <category><![CDATA[gsoc]]></category>
            <category><![CDATA[open-source]]></category>
            <dc:creator><![CDATA[contact@magarcia.io]]></dc:creator>
            <pubDate>Sat, 07 May 2016 00:00:00 GMT</pubDate>
            <content:encoded>&lt;p&gt;This year I was selected for the Google Summer of Code Program with my first
proposal. I sought a project related to Angular, especially Angular 2, to deepen
my knowledge of this framework and its new version. I was thrilled to find
&lt;a href=&quot;https://github.com/openSUSE/mentoring/issues/16&quot;&gt;this idea&lt;/a&gt; from the openSUSE
community.&lt;/p&gt;
&lt;p&gt;The community bonding period began April 22. I contacted my mentors and started
fixing bugs in
&lt;a href=&quot;https://github.com/jangouts/jangouts/pulls?utf8=%E2%9C%93&amp;amp;q=is%3Apr+author%3Amagarcia+created%3A%3C2016-05-07_&quot;&gt;Jangouts&lt;/a&gt;
while reporting issues to
&lt;a href=&quot;https://github.com/meetecho/janus-gateway/issues?utf8=%E2%9C%93&amp;amp;q=+is%3Aissue+author%3Amagarcia+created%3A%3C2016-05-07&quot;&gt;Janus-Gateway&lt;/a&gt;.
These posts will serve as a progress journal for me and my mentors.&lt;/p&gt;
&lt;p&gt;These weeks I explored the Jangouts codebase, experimented with Angular 2, and
followed &lt;a href=&quot;http://www.ng-conf.org&quot;&gt;ng-conf&lt;/a&gt;.&lt;/p&gt;
</content:encoded>
        </item>
    </channel>
</rss>