5 bookmarks for 2026-05-11

972.

A Lesson From the Cockpit

www.subbu.org/essays/2026/a-lesson-from-the-cockpit

The tech industry claims AI will boost developer productivity, but the evidence is mixed: AI may increase task-level productivity, yet its impact on end-to-end productivity in complex brownfield situations is unclear. The aviation industry’s experience with fly-by-wire automation offers a useful parallel, highlighting the need to balance human expertise with AI assistance and the importance of ongoing training and skill development.

The aviation industry is regulated by bodies like the International Civil Aviation Organization, which enforce standards and practices, including upset prevention and recovery training (UPRT). In contrast, the medical and counselling industries lack similar regulatory oversight, leading to concerns about AI-induced deskilling and loss of expertise. The software engineering industry, facing similar challenges, can learn from aviation’s approach by prioritizing foundational skills, mandatory unassisted practice, and institutional oversight to ensure responsible AI adoption.


971.

extremely low frequencies

computer.rip/2026-05-09-extremely-low-frequencies.html

Submarine communication posed a significant challenge because seawater strongly attenuates radio waves. Early attempts, such as direct conduction through the water and floating antenna buoys, had limited success. The breakthrough came with long-wave radio, using low frequencies and coil antennas, which proved effective for submarine communication and was adopted by the Navy.

The Navy’s VLF (Very Low Frequency) communication system, with its extremely long wavelengths, enabled reliable communication with submerged submarines. However, the limitations of VLF, including enormous antennas and narrow bandwidth, led to the exploration of even lower frequencies, specifically ELF (Extremely Low Frequency), for improved submarine communication during the Cold War. The development of ELF faced challenges, including secrecy, public opposition, and technical complexity, but it offered the potential for deeper signal penetration to submerged submarines and enhanced nuclear deterrence.
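The physics driving this march down the spectrum is the skin effect: the depth to which a radio wave penetrates a conductor like seawater shrinks with the square root of frequency. A back-of-the-envelope sketch (assuming a typical seawater conductivity of about 4 S/m; 76 Hz is the frequency Project ELF ultimately used):

    import math

    MU0 = 4 * math.pi * 1e-7    # permeability of free space, H/m
    SIGMA = 4.0                 # assumed seawater conductivity, S/m

    def skin_depth_m(freq_hz):
        """Depth at which the field amplitude falls to 1/e, in metres."""
        return 1.0 / math.sqrt(math.pi * freq_hz * MU0 * SIGMA)

    for label, f in [("HF (10 MHz)", 10e6), ("VLF (20 kHz)", 20e3), ("ELF (76 Hz)", 76.0)]:
        print(f"{label:>12}: skin depth ≈ {skin_depth_m(f):7.2f} m")

That works out to roughly 8 cm at HF, a couple of metres at VLF (hence submarines trailing antennas near the surface), and about 29 metres at ELF, which after a few skin depths still leaves a usable signal at operational depths.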

The Sanguine ELF station, a Cold War-era project, proposed a massive network of over 100 transmitting stations to ensure communication with submarines. Despite initial enthusiasm, the project faced public opposition, safety concerns, and budgetary constraints, leading to its cancellation. The Navy later pursued Project Seafarer, a scaled-down version, but it also met resistance and was ultimately abandoned.

The US Navy’s Project ELF, a system for communicating with submarines using extremely low-frequency radio waves, faced significant challenges. Despite overcoming political opposition and environmental concerns, the system was plagued by technical inefficiencies and limited capabilities. Ultimately, Project ELF was deemed obsolete and shut down after just 15 years of service, leaving behind a legacy of controversy and a notable episode of The X-Files.
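A quick calculation illustrates the inefficiency: at 76 Hz the wavelength is nearly 4,000 km, so even an antenna tens of kilometres long is a vanishing fraction of a wavelength. A rough sketch (the 22 km figure is an assumption about the approximate scale of the ground dipoles, not a number from the article):

    C = 299_792_458  # speed of light, m/s

    def wavelength_km(freq_hz):
        return C / freq_hz / 1000.0

    for label, f in [("VLF (20 kHz)", 20e3), ("ELF (76 Hz)", 76.0)]:
        print(f"{label}: wavelength ≈ {wavelength_km(f):,.0f} km")

    # Efficient antennas are a sizable fraction of a wavelength long.
    antenna_km = 22.0  # assumed ground-dipole length for illustration
    print(f"antenna/wavelength ≈ {antenna_km / wavelength_km(76.0):.3%}")

An antenna about half a percent of a wavelength long has a minuscule radiation resistance, so almost all of the input power is lost as heat in the wires and the ground rather than radiated.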


970.

The Inference Shift

stratechery.com/2026/the-inference-shift

Cerebras Systems, an AI chipmaker, is raising the price and size of its IPO due to high demand. While GPUs, particularly Nvidia’s, have dominated the AI compute landscape, Cerebras takes a different approach with its wafer-scale design, building a single enormous chip from an entire silicon wafer to provide immense compute power and high-speed on-chip memory access. This makes Cerebras particularly well suited to inference workloads, though its high cost and limited memory capacity for larger models pose challenges.

The future of AI chips will be shaped by the distinction between “answer inference” and “agentic inference.” Answer inference, where a user is actively waiting on a response (as in coding), benefits from high-speed chips; agentic inference, in which models autonomously work through long-running tasks, will prioritize memory capacity and cost over raw speed. This shift will lead to a more sophisticated memory hierarchy, potentially reducing the dominance of GPUs and favouring slower, cheaper memory tiers and CPUs.
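A rough sizing sketch shows why agentic workloads tilt toward capacity. Using an assumed 70B-parameter model configuration (all numbers below are illustrative, not from the article), the KV cache for long-context agent sessions quickly dwarfs the weights themselves:

    GB = 1024**3

    # Weights: 70B parameters at 8-bit quantization (assumed).
    weight_bytes = 70e9 * 1

    # KV cache per token = 2 (K and V) * layers * kv_heads * head_dim * bytes.
    layers, kv_heads, head_dim, dtype_bytes = 80, 8, 128, 2
    kv_per_token = 2 * layers * kv_heads * head_dim * dtype_bytes

    context_len = 128_000   # long agentic context (assumed)
    sessions = 16           # concurrent background agents (assumed)
    kv_total = kv_per_token * context_len * sessions

    print(f"weights:  {weight_bytes / GB:6.0f} GiB")
    print(f"KV/token: {kv_per_token / 1024:6.0f} KiB")
    print(f"KV total: {kv_total / GB:6.0f} GiB across {sessions} sessions")

Under these assumptions the weights take about 65 GiB while the session state runs to hundreds of gigabytes; nobody waiting on a background agent notices an extra second of latency, but all that state has to live somewhere cheap, which is the opening for slower memory tiers and CPUs.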

Nvidia CEO Jensen Huang believes future computing speed-ups will come from systems innovation, not Moore’s Law. The implication is that existing computing power is sufficient, and the focus should be on optimizing its use.

969.

How the Bible Was Copied, Preserved, and Passed Down

www.jeremysarber.com/p/how-the-bible-was-copied-preserved

The article examines how Scripture, particularly the Old Testament, was transmitted. While the original writings (autographs) no longer exist, the Old Testament was copied through a structured process involving trained scribes. The Masoretes, Jewish scholars of the early Middle Ages, played a pivotal role in standardizing and preserving the Hebrew Bible through meticulous copying and the development of vowel pointing.

The Masoretes developed a system of vowel points and marginal notes to preserve the accuracy of the Old Testament text. The discovery of the Dead Sea Scrolls, dating back to 200-100 BC, provided a valuable opportunity to compare the Masoretic Text with earlier manuscripts. The comparison revealed remarkable consistency, with only about one percent of words showing variation, most of which were minor and did not alter the core message of the text.

The Masoretic Text, the primary source for modern English translations of the Old Testament, is remarkably stable despite minor variations. While the Dead Sea Scrolls and other sources reveal some omissions and discrepancies, these account for only a small percentage of the text. The New Testament, transmitted under less controlled conditions, benefits from a vast and diverse manuscript tradition, providing a strong foundation for confidence in its text.

968.

What comes after web literacy?

blog.dougbelshaw.com/after-web-literacy

Digital literacy is evolving beyond web literacy, which focused on understanding HTML, CSS, and JavaScript. In the AI era, literacy involves understanding the layers of abstraction in digital systems, from direct manipulation of code to opaque AI models. The key is recognizing which layer to operate at and understanding the trade-offs between control, visibility, and speed.