What caught my eye this week.
Bad news! Not only are the machines now coming for our cushy brain-based desk jobs, but our best response will be to hug it out.
At least that’s one takeaway from a report in the Financial Times this week on what kinds of jobs have done well as workplaces have become ever more touchy-feely – and thus which will best survive any Artificial Intelligence takeover.
The FT article (no paywall) cites research showing that over the past 20 years:
…machines and global trade replaced rote tasks that could be coded and scripted, like punching holes in sheets of metal, routing telephone calls or transcribing doctor’s notes.
Work that was left catered to a narrow group of people with expertise and advanced training, such as doctors, software engineers or college professors, and armies of people who could do hands-on service work with little training, like manicurists, coffee baristas or bartenders.
This trend will continue as AI begins to climb the food chain. But the final outcome – as explored by the FT – remains an open question.
Will AI make our more mediocre workers more competent?
Or will it simply make more competent workers jobless?
Enter The Matrix
I’ve been including AI links in Weekend Reading for a couple of years now. Rarely to any comment from readers!
Yet I continue to feature them because – like the environmental issues – I think AI is sure to be pivotal in how our future prosperity plays out. For good or ill, and potentially overwhelming our personal financial plans.
The rapid advance of AI since 2016 had been a little side-interest of mine, which I discussed elsewhere on the Web and with nerdy friends in real life.
I’d been an optimist, albeit one who used to tease my chums that it’d soon do them out of a coding job (whilst simultaneously being far too optimistic about the imminent arrival of self-driving cars).
But the arrival of ChatGPT was a step-change. AI risks now looked existential. Both at the highest level – the Terminator scenario – and at the more prosaic end, where it might just do us all out of gainful employment.
True, as the AI researchers have basically told us (see The Atlantic link below) there’s not much we can do about it anyway.
The Large Language Models driving today’s advances in AI may cap out soon due to energy constraints, or they may be the seeds of a super-intelligence. But nobody can stop progress.
What we must all appreciate though is that something is happening.
It’s not hype. Or at least for sure the spending isn’t.
Ex Machina
Anyone who was around in the 1990s will remember how business suddenly got religion at the end of that decade about the Internet.
This is now happening with AI:
Source: TKer
And it’s not only talk, there’s massive spending behind it:
Source: TKer
I’ve been playing with a theory that one reason the so-called ‘hyper-scalers’ – basically the FAANGs that don’t make cars, so Amazon, Google, Facebook et al – and other US tech giants are so profitable despite their size, continued growth, and 2022-2023 layoffs is that they have been first to deploy AI in force.
If that’s true it could be an ominous sign for workers – but positive for productivity and profit margins.
Recent results from Facebook (aka Meta) put a hole in this thesis, however. The spending and investment is there, but management couldn’t point to much in the way of a return – except perhaps the renewed lethality of its ad-targeting algorithms, despite Apple and Google having crimped the use of cookies.
Blade stunner
For now the one company we can be sure is making unbelievable profits from AI is the chipmaker Nvidia:
Source: Axios
Which raises the further question of whether – far from being overvalued – the US tech giants are still must-owns as AI rolls out across the corporate world.
If so, the silver lining to their dominance in the indices is that most passive investors have a chunky exposure to them anyway. Global tracker ETFs are now about two-thirds in US stocks. And the US indices are heavily tech-orientated.
But should active investors try to up that allocation still further?
In thinking about this, it’s hard not to return to where I started: the Dotcom boom. Which of course ended in a bust.
John Rekenthaler of Morningstar had a similar thought. And so he went back to see what happened to a Dotcom enthusiast who went all-in on that tech boom in 1999.
Not surprisingly given the tech market meltdown that began scarcely 12 months later, the long-term results are not pretty. Bad, in fact, if you didn’t happen to buy and hold Amazon, as it was one of the few Dotcoms that ultimately delivered the goods.
Without Amazon you lagged the market, though you did beat inflation.
And yet the Internet has ended up all around us. It really did change our world.
Thematic investing is hard!
I wouldn’t want to be without exposure to tech stocks, given how everything is up in the air. Better I own the robots than someone else if they’re really coming for my job.
But beware being too human in your over-enthusiasm when it comes to your portfolio.
The game has barely begun and we don’t yet know who will win or lose. The Dotcom crash taught us that, at least.
Have a great weekend!
From Monevator
Does gold improve portfolio returns? – Monevator [Members]
How a mortgage hedges against inflation – Monevator
From the archive-ator: How gold is taxed – Monevator
News
Note: Some links are Google search results – in PC/desktop view click through to read the article. Try privacy/incognito mode to avoid cookies. Consider subscribing to sites you visit a lot.
UK inflation rate falls to lowest level in almost three years – BBC
Energy price cap will drop by 7% from July [to £1,568] – Ofgem
House prices are modestly rising, driven by 17% annual spike in new build values – T.I.M.
Hargreaves Lansdown rejects £4.7bn takeover approach – This Is Money
Judge: Craig Wright forged documents on ‘grand scale’ to support Bitcoin lie – Ars Technica
FCA boss threatens private equity with regulator clampdown – CityAM
Sunak says it’s 4th July, in the rain, against a subversive soundtrack [Iconic] – YouTube
Sir Jim Ratcliffe scolds Tories over handling of economy and immigration after Brexit – Sky
No, it’s not all the Tories’ fault… but Sunak and Hunt were too little, too late – Bloomberg
Products and services
Pay attention to catches as well as carrots when switching bank accounts – Guardian
Which energy firm offers the cheapest way to get a heat pump? – T.I.M.
How to get the most from second-hand charity shops – Which
Get £200 cashback with an Interactive Investor SIPP. New customers only. Minimum £15,000 account size. Terms apply – Interactive Investor
Nine out of ten savings accounts now beat inflation – This Is Money
Problems when transferring a cash ISA – Be Clever With Your Cash
Nationwide launches a trio of member deals worth up to £300 – Which
Transfer your ISA to InvestEngine by 31 May and you could get up to £2,500 as a cashback bonus (T&Cs apply. Capital at risk) – InvestEngine
Seven sneaky clauses in estate agent contracts that can cost you dear – This Is Money
Halifax Reward multiple account hack: worth up to £360 a year – Be Clever With Your Cash
Hidden homes in England and Wales for sale, in pictures – Guardian
Comment and opinion
No, the stock market is not rigged against the little guy – A.W.O.C.S.
The life hedge… – We’re Gonna Get Those Bastards
…is easier said than implemented [US, nerdy] – Random Roger
Checking out a fake Ray Dalio Instagram investing scam – Sherwood
An open letter to Vanguard’s new CEO – Echo Beach
If you look past the headlines, London is charging ahead – CityAM
Most of us have too much in bonds [Search result] – FT
Why we still believe in gold – Unherd
Are ‘fallen angel’ high-yield bonds the last free lunch in investing? – Morningstar
For love or money – Humble Dollar
Naughty corner: Active antics
Fund manager warns putting £20k in the US now will [possibly!] lose you almost £8k – Trustnet
A deep dive into US inflation, interest rates, and the US economy – Calafia Beach Pundit
A tool for testing investor confidence – Behavioural Investment
When to use covered call options – Fortunes & Frictions
Valuing Close Brothers after the dividend suspension – UK Dividend Stocks
Meme stock mania has entered its postmodern phase [I’m editorialising!] – Sherwood
Kindle book bargains
Bust?: Saving the Economy, Democracy, and Our Sanity by Robert Peston – £0.99 on Kindle
Number Go Up by Zeke Faux – £0.99 on Kindle
How to Own the World by Andrew Craig – £0.99 on Kindle
The Great Post Office Scandal by Nick Wallis – £0.99 on Kindle
Environmental factors
Taking the temperature of your green portfolio [Search result] – FT
The Himalayan village forced to relocate – BBC
‘Never-ending’ UK rain made 10 times more likely by climate crisis, study says – Guardian
So long triploids, hello creamy oysters – Hakai
Robot overlord roundup
We’ll need a universal basic income: AI ‘godfather’ – BBC
Google’s AI search results are already getting ads – The Verge
AI engineer pay hits $300,000 in the US – Sherwood
With the ScarJo rift, OpenAI just gave the entire game away – The Atlantic [h/t Abnormal Returns]
Perspective mini-special
How much is a memory worth? – Mike Troxell
We are all surrounded by immense wealth – Raptitude
How to blow up your portfolio in six minutes – A Teachable Moment
My death odyssey – Humble Dollar
Off our beat
The ultimate life coach – Mr Money Mustache
How to cultivate taste in the age of algorithms – Behavioural Scientist
Trump scams the people who trust him – Slow Boring
Buying London is grotesque TV, but it reflects the capital’s property market – Guardian
The algorithmic radicalisation of Taylor Swift – The Atlantic via MSN
And finally…
“Three simple rules – pay less, diversify more and be contrarian – will serve almost everyone well.”
– John Kay, The Long and the Short of It
Like these links? Subscribe to get them every Friday. Note this article includes affiliate links, such as from Amazon and Interactive Investor.







Reasoning models are dead. Long live reasoning models!:
https://open.substack.com/pub/artificialintelligencemadesimple/p/reasoning-models-are-a-dead-end-breakdowns
Shorter TL;DR of the TL;DR: it’s the wrong architecture – similar to Marcus’s disparaging of the pre-trained, backpropagation, weight-adjustment, deep-layered, token-prediction paradigm, and his advocacy of hybridised neural nets with a formal symbolic language overlay / syncretic approaches (deterministic, programme-like nodes combined with probabilistic input/output net nodes).
The longer TL;DR, from the actual TL;DR: “Reasoning models are a dead end because they try to compress a dynamic control process into static weights. Reasoning is not a pattern you can train; it is an algorithm you must run. When you train on reasoning traces, you only capture the final surviving path”.
My view: we need a better model reward function, and the ability to learn continuously to update a realistic world model.
Very bullish on TSMC. Massive upside to increased value chain capture on Nvidia chips:
https://open.substack.com/pub/shanakaanslemperera/p/tsmc-the-10-trillion-invisible-toll
But only today, I read elsewhere of Xi’s invasion/blockade plans….
Honoured by the link to this thread in the Monevator Weekend Reading Robot Overlord roundup 🙂 Thank you @TI.
The financing doom loop (and ‘dark fibre’ parallels: “the fiber optic buildout of 1999 where $500 billion of infrastructure investment produced 2.7 percent utilization and 12.8 percent default rates”):
https://open.substack.com/pub/shanakaanslemperera/p/the-stargate-deception
‘And yet it moves’: only today I got an LRM, over half a dozen iterations, to produce 100% correct, original, 6,000-word advice on a novel issue in less than a day, including iteration, checking and polishing up. Realistically that would take three days from scratch without any assistance from ‘AI’ (or whatever it should be called). The hyperscalers probably won’t raise the $3 tn to $8 tn needed for the data centre buildout by 2028 to 2030, nor will OpenAI or anyone get to 250 gigawatts of data centre electricity use by 2032 (equal to India’s entire electricity consumption). But that doesn’t mean it wouldn’t be desirable to try to get there given what the technology has already demonstrated it can do. It might be merely mundane utility – in no way AGI itself, or even comparable to meaningful notions of AGI – but I can easily see it eventually replacing 50% or more of white collar / knowledge economy jobs (maybe sooner rather than later), saving tens of trillions of dollars annually in payroll costs and feeding through directly to the bottom line in the P&L.
Crikey, this is a bit wild as a theory of how to get to AGI:
https://open.substack.com/pub/josecrespo/p/the-math-openai-doesnt-want-you-to
It might, or might not, be b*ts**t, but I do definitely agree with this bit:
“You cannot build AGI if you cannot see what your model is doing. You cannot deploy industrial AI if you cannot audit its reasoning. You cannot trust a system that cannot demonstrate coherence.”
Forward P/Es of the Mag’ 7 today:
https://substack.com/@dividendtalks/note/c-194489846?r=2kxl2k
Spot the odd one out there 😉
Musk is no Tony Stark IMHO, but he is a master salesman for his own shape-shifting narrative – a progressive promoting clean energy and BEVs, then FSD, then robotaxis, then a MAGA conservative pushing Optimus robotics, then SpaceX (which may float either later this year or next for an implied $1.5 tn market cap, at a 68x forward sales and ~100x trailing revenue valuation), and now xAI.
Then again, Palantir’s on a more than 100x forward sales valuation. It could fall by nearly 60% to $74 and still be no less than the present Price/Sales ratio of Tesla, although, unlike TSLA, PLTR isn’t (yet) a Mag’ 7 stock.
Jevons’ Paradox on turbo boost.
Why ‘AI’ (and by extrapolation and extension AGI) will require more workers (and create more jobs than it destroys) even if (eventually) 5 white collar workers end up being able to do the work of 50 in the pre-LLM era (<2023):
https://open.substack.com/pub/ruben/p/replaced
Intriguing, and at least faintly plausible; but, to begin with, the job losses must surely first occur as the quickest way for shareholder value orientated businesses to boost the bottom line.
In the Industrial Revolution more jobs were eventually created and wealth ultimately cascaded down.
However, initially, huge numbers of people in crafts were put out of work (the Luddite movement was right in this respect), and people working in factories were paid far worse, and had even more appalling working and living conditions, than even the essentially peasant conditions which they or their parents had lived under before the onset of urbanisation.
The immediate historical precedent is not good.
Top ten AI stories of 2025:
https://open.substack.com/pub/generatives/p/10-ai-stories-that-shaped-2025
Personally, I’d have put DeepSeek higher up than number ten but there’s no question bubble fears and infrastructure spend and bottlenecks should be on the winner’s rostrum of the highest priority pieces.
Arguably, the continued improvements to LRMs/ inference from DeepSeek R1, through Grok 3, Gemini 2.5 Pro, Claude 4 Sonnet, on to GPT-5, then Claude 4.5 Sonnet, GPT-5.1, Gemini 3 Pro, Claude 4.5 Opus, to GPT-5.2 is the biggest development of all of the last year, given the scaling wall.
All these IPOs do seem a bit 1999ish.
https://open.substack.com/pub/aidisruption/p/mega-ipos-flood-2024-investors-cash
Too much supply coming on for the demand for new AI and frontier tech (SpaceX etc) shares? Time as always will tell.
The benefits of Prompt Engineering:
https://open.substack.com/pub/tylerfolkman/p/i-automated-my-own-ai-prompts-heres
A look at the semis and revised hyperscaler Capex in the years ahead:
https://open.substack.com/pub/techfund/p/ai-and-semis-outlook-2026
This mentions token usage / demand growth coming in much stronger in 2025 than expected beforehand. But the sheer scale of the growth in use and decline in prices is not covered, and deserves another mention here: one model provider (I can’t remember now if it’s OpenAI or Google, for ChatGPT or Gemini respectively) has seen monthly token usage go from 9.7 trillion to over 1.3 quadrillion (1,300 trillion) tokens every thirty days, with the price per million tokens down 99.7%, all in 18 months (IIRC mid April 2024 to mid October 2025 – I’d need to check though, this is from memory). One token = 0.7 to 0.8 words.
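As a rough sanity check on those from-memory figures (all numbers here are the approximations quoted above, not verified data), the implied multiples work out like this:

```python
# Back-of-envelope check on the quoted token figures (from memory, unverified):
# monthly usage 9.7 trillion -> 1,300 trillion tokens, and the price per
# million tokens down 99.7%, over roughly 18 months.

start_tokens = 9.7e12   # tokens per month at the start of the period
end_tokens = 1.3e15     # tokens per month ~18 months later
price_fall = 0.997      # 99.7% fall in price per million tokens

usage_multiple = end_tokens / start_tokens          # ~134x more tokens used
price_multiple = 1 - price_fall                     # each token at ~0.3% of old price
revenue_multiple = usage_multiple * price_multiple  # net effect on monthly spend

print(f"Usage up ~{usage_multiple:.0f}x")                  # ~134x
print(f"Implied spend multiple ~{revenue_multiple:.2f}x")  # ~0.40x
```

If both figures are right, a ~134x jump in usage still implies monthly token revenue falling to roughly 40% of its starting level – a neat illustration of why Jevons-style demand growth doesn’t automatically rescue the providers’ top line.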
A good all round interview today on all aspects of AI:
https://open.substack.com/pub/sophiecapital/p/inevitability-weekly-5
TPUs/Google, Apple and low end LLM apps, prospects for eventual ASI etc.
An engaging take on how AI will, over the next 50 years or so, lever the top percentile of workers/entrepreneurs and businesses:
https://open.substack.com/pub/generativevalue/p/2025-annual-letter
The more I read + learn the more convinced I am that the optimum portfolio is a modestly leveraged barbell with a lower risk underpinning of:
– some low volatility, high moat, low disruption risk surface and intersection of quality + value stocks with a degree of ‘inflation protection’ (especially consumer staples and infrastructure/ utilities);
– plenty of ‘risk off’ assets (gold and precious metals, long duration TIPS, global macro HF strategies);
– some cyclical broad commodity and deep value energy producer and junior miners (for dividends and optionality respectively) and opportunistic exposure to deeply discounted HY REITs and other CEFs/ITs;
– With a bleeding-edge overlay, at the other end of the risk/opportunity barbell, of high-growth tech and tech-disruption stocks, from the mega caps right down the cap-weight scale to (at the app/platform end) small/micro/nano caps (including fintech-like operators);
– Plus some sort of juicing, with a low single-digit allocation of starting capital, to a DCA leveraged equity rotation strategy (which, if the worst happens and it goes to zero, still only burns through a few percent of the starting value of the portfolio), contributing less to the DCA when valuations are high and more in the recovery (e.g. back above the 200 DSMA) after a crash (e.g. a more than 20% drawdown on the 52-week high).
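For what it’s worth, the DCA rule in that last bullet can be sketched in a few lines of logic. This is only an illustrative sketch of the rule as described – the 0.5x and 2x contribution scalings, the function shape and the `recent_crash` flag are my assumptions, not anything from a real system:

```python
def dca_contribution(price, sma_200d, high_52w, recent_crash, base=100.0):
    """Illustrative sketch of the valuation/drawdown-aware DCA rule.

    Contribute less when valuations are high, and more in the recovery
    (price back above its 200-day SMA) after a crash (>20% drawdown on
    the 52-week high). The 0.5x / 2x scalings are assumptions.
    """
    in_drawdown = price < 0.8 * high_52w       # >20% below 52-week high
    if recent_crash and not in_drawdown and price > sma_200d:
        return 2.0 * base                      # recovery phase: add more
    if not recent_crash and price > sma_200d:
        return 0.5 * base                      # stretched valuations: add less
    return base                                # otherwise: standard contribution

# e.g. recovering above the 200-day SMA after a crash doubles the contribution:
print(dca_contribution(110, sma_200d=100, high_52w=120, recent_crash=True))  # 200.0
```

The point of the cap on allocation in the bullet is visible here too: however the multipliers are tuned, the strategy only ever draws on a few percent of starting capital.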
AI IPO bonanza:
https://open.substack.com/pub/aisupremacy/p/ai-in-2025-recap-the-year-the-old-rules-ai-trends
Money left on the table: where’s the value left now in the AI stack?
https://open.substack.com/pub/randomwalkwithdata/p/assets-or-software-and-at-what-price
Depreciation coming (nice Capex chart):
https://substack.com/@therealrandomwalk/note/c-195510981?r=2kxl2k
Kernel optimisation:
https://open.substack.com/pub/importai/p/import-ai-439-ai-kernels-decentralized
Intriguing: “The most important takeaway is that decentralized training is growing quickly relative to frontier AI training, with decentralized training runs growing their compute by 20X a year versus 5X a year for frontier training runs. But the other important takeaway is that the sizes of these things are completely different – today’s decentralized training runs are still about 1000X smaller than frontier ones.”
$24 tn priced in, $11 tn left on the table, assuming a 20% margin on $8 tn of (to 2030?) cumulative Capex and a 22x PE:
https://substack.com/@therealrandomwalk/note/c-193161856?r=2kxl2k
So true.
https://substack.com/@mjreard/note/c-185945397?r=2kxl2k
Dwarkesh is my Go To AI Guru.
@Delta Hedge — The new Michael Cembalest / JP Morgan outlook for 2026 is here. Quite a bit of discussion about AI re: the market implications:
https://assets.jpmprivatebank.com/content/dam/jpm-pb-aem/global/en/documents/eotm/smothering-heights.pdf
Thanks for the link there @TI. Fantastic piece of research by JPM.
So we’re looking at the New Deal Public Works Administration, the Manhattan Project, the electrification of industry, the Interstate Highway System, the Apollo programme and the Broadband rollout all rolled into one (p.4)!
What could possibly go wrong?
I was struck by two points on p.9.
First the 1.5x-1.75x cost improvement and 1.25x-1.5x speed improvement of current frontier model assistance to existing human task experts.
Obviously, this seems cautious (and necessarily subjective, even though precisely quantified here) in itself, especially given that earlier models, like GPT-4o, are shown here as actually slowing down experts to half unassisted speed and doubling the task cost. That seems a very harsh assessment of earlier models’ IRL performance.
But what really strikes me here is the disconnect between the chart and reality.
The chart suggests to me organisations could shed 30% of their staff costs now for the same output or, if the market they’re selling into has capacity, increase output by 50% now for the same cost base.
Yet when I look left and right I see no sign of mass redundancy and no productivity boom.
It’s like what Alan Greenspan said about the Internet in the mid 1990s (1996 perhaps?), namely that the impact was showing up everywhere except in the productivity and growth data (and, of course, a lot of the corporate profits of that era turned out to be baloney, as we now know).
So what the heck is happening???
We’ve got a demonstrably very capable technology.
Aside from motors and electricity this is perhaps the most capable (and in many ways it is *the* most impressive) tech that I’ve ever come across.
And yet I see near zero sign of it (yet) changing businesses truly fundamentally, or in many cases even very much.
Are people just using this to work less, producing the same output with the same numbers and mix of staff and, therefore, with the same cost base?
Will business act like Tito’s Worker Cooperatives and basically let staff run the show for their benefit, or will shareholders at some point demand their pound of cost cutting flesh out of payroll?
Maybe it’ll all just take a long time.
Organisational and institutional culture bottlenecks are, in their way, as significant as energy, data and financing availability to the success of this endeavour.
The second point is on the question of data bottlenecks.
The footnotes on p.9 reference a total of 4,750 tn tokens of data on the internet, on video and in image libraries. That sounds like loads, but in the context of LLMs is it really?
Frontier models are already using up to 10^26 (100 septillion) effective floating-point operations in training, and seeing demand of over 1,300 tn tokens per month.
Accordingly, it seems quite blasé for JPM to note that those 4,750 tn tokens available as training data (3,100 tn + 300 tn + 1,350 tn) will be enough for training until 2030.
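Training tokens and inference tokens are of course different things, so this is only a scale comparison, but the two figures sit uncomfortably close together:

```python
# Scale check: JPM's cited stock of tokens versus the monthly inference
# demand figure quoted earlier. Not like-for-like (training data vs
# inference throughput) -- purely an order-of-magnitude comparison.

stock_tn = 3_100 + 300 + 1_350   # internet + image + video tokens (trillions)
monthly_demand_tn = 1_300        # inference tokens served per month (trillions)

print(stock_tn)                       # 4750 -> matches the 4,750 tn total
print(stock_tn / monthly_demand_tn)   # ~3.65: under four months' worth of demand
```

In other words, the entire cited stock of human-generated tokens amounts to only a few months of current inference throughput, which is why the data bottleneck worry doesn’t feel overcooked.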
Dave Friedman dissects Ed’s 19,000 word Ensh*tification of AI piece (previously linked to):
https://open.substack.com/pub/davefriedman/p/ai-capex-built-on-options-priced
Zvi’s on the case with that Philip Trammell and Dwarkesh Patel piece on ‘Capital in the 22nd century’, which I previously linked to and which was also separately linked to by @platformer in the most recent W/e Reading MV links:
https://open.substack.com/pub/thezvi/p/dos-capita
This isn’t good for AI power consumption needs – the northern Virginia data centre electricity crunch:
https://open.substack.com/pub/privatemarketsnews/p/the-infrastructure-bottleneck-nobody
Or perhaps hydrogen fluoride is the ‘real’ AI bottleneck (for making silicon tetrafluoride for chip wafers):
https://open.substack.com/pub/shanakaanslemperera/p/the-invisible-chokepoint
Scaling walls, energy connection backlogs, power generation shortfalls, data availability insufficiency, financing issues, revenue shortfalls and operating losses. It’s not an obviously happy picture.
AI accounting controversy aplenty, although, TBF, it’s all out in the open and above board. But being technically legit in terms of GAAP, IFRS and the law doesn’t make it a good idea or investable:
https://open.substack.com/pub/shanakaanslemperera/p/the-35-trillion-ai-mirage-the-measurement
The periodic table of AI (a very useful conceptualisation of how it fits together):
https://youtu.be/ESBMgZHzfG0?si=zXyG7D_nSFZb42Y7
“The ratio of committed capital expenditure to current revenue is approximately 107 to 1. For comparison, the most capital-intensive industries in the traditional economy, such as semiconductor fabrication or liquefied natural gas terminals, typically operate with capital expenditure to revenue ratios of 3 to 4. OpenAI is operating at roughly 30 times the capital intensity of industries already considered at the extreme end of infrastructure investment”: Breaking it down:
https://open.substack.com/pub/shanakaanslemperera/p/the-ouroboros-protocol
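The “roughly 30 times” in that quote checks out arithmetically:

```python
# Checking the quote's arithmetic: committed-capex-to-revenue ratios.
openai_ratio = 107        # OpenAI: committed capex vs current revenue
traditional_ratio = 3.5   # midpoint of the quoted 3-to-4 range for semis/LNG

print(openai_ratio / traditional_ratio)  # ~30.6 -> "roughly 30 times"
```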
Nvidia Rubin CPX “a chip that analysts estimate costs roughly 25% as much to manufacture as a standard Rubin R200 while delivering approximately 60% of the compute performance”:
https://open.substack.com/pub/shanakaanslemperera/p/the-architecture-of-dominance-nvidias
Missing the forest for the trees on an AI bubble? This was an especially useful PoV on the glaring differences from the 1990s, but with one obvious parallel: the last mile to the home (copper wire via dial-up) for pre-broadband internet on the one hand, and the electricity bottleneck for data centres right now on the other:
https://youtu.be/Wcv0600V5q4?si=dOWPo9Xf1HCJ2Cjw
I play all my YT consumption at 2x and I suspect this guy sounds more convincing because of it 😉
A nice link to the 14 top AI resources (in the author’s opinion) from 2025:
https://open.substack.com/pub/theaiopportunity/p/the-2025-ai-breakthroughs
Sorry these are going to be so briefly introduced today; pressure of work means I don’t have longer, unfortunately:
Software too cheap to meter:
https://open.substack.com/pub/amistrongeryet/p/software-too-cheap-to-meter
Said the same about nuclear power IIRC.
Trouble at mill with Meta and LLMs:
https://open.substack.com/pub/garymarcus/p/breaking-marcus-weighs-in-mostly
Don’t forget that the semiconductors have to be packaged up to work:
https://open.substack.com/pub/marklapedus/p/issues-challenges-with-glass-substrates
Reinforcement Learning as part of Intelligence As A Service (‘IaaS’):
https://open.substack.com/pub/semianalysis/p/rl-environments-and-rl-for-science
Shifting the bottleneck from “insufficient compute” to “insufficient context” – “context is the new bottleneck”:
https://open.substack.com/pub/fundaai/p/why-dram-and-ssd-could-become-two
Vibe coding with Claude: 39 ‘free’ models to use, apparently…:
https://open.substack.com/pub/aidisruption/p/39-free-models-to-use-with-claude
Nvidia Rubin GPU a ‘game changer’???:
https://open.substack.com/pub/aidisruption/p/ces-nvidias-rubin-cuts-ai-inference
But elsewhere I see/read (might have been on YT, can’t recall now) that effective FLOP is up ~750x in two years but cache memory only ~1.7x. The train can only travel at the speed of the slowest carriage…
Interesting spin off from Google Ventures:
https://open.substack.com/pub/appeconomyinsights/p/how-motive-makes-money
Nice write-up from Compounding Quality of an AI-themed tailwind stock surfing the memory-requirement wave:
https://open.substack.com/pub/qualitystocks/p/stock-of-the-week-micron-riding-the
Interesting. Three dimensional wafer/chip stacking:
https://open.substack.com/pub/marklapedus/p/nhanced-expands-hybrid-bonding-capabilities
Surely there’s a thermodynamic/ cooling barrier to this though? Again, the weakest link in the process of delivering AI (i.e. rare earth mining and refining, electricity generation and network connection, memory usage, synthetic training data over reliance resulting in effective model collapse, vendor financing and private lending limits to data centre build out funding, heat dissipation / cooling, organisational and societal resistance to deployment of AI at scale and in depth, push back about removing expensive white collar human roles ‘from the loop’, psychological and training limitations on using models most effectively etc etc) is the all too difficult to bypass bottleneck.
Personally, I think the optimum way forward to attempting AGI is a mix of:
– Massive diversification of research (and therefore necessarily disinvestment away from LLMs) towards a new neuro-symbolic hybrid set of approaches.
– Doubling and tripling down on algorithmic improvements over hardware. So much cheaper. The Chinese have this one right. It could be their winning card.
Robots on the move:
https://open.substack.com/pub/robopub/p/new-atlas-robot-heads-to-hyundai
Have we gotten DeepSeek and China completely and totally wrong????? 🙁
https://open.substack.com/pub/shanakaanslemperera/p/56-million-was-the-lie-589-billion
On the face of it, bullish for US picks-and-shovels stocks in the data centre stack.
The buzz is shifting from GPUs to memory. SanDisk, Seagate. One share tipping site today: “In February 2025, Western Digital spun off Sandisk. Wall Street yawned. The market was brutal. Sandisk opened at $52.20—then promptly crashed 7% to close at $48.60. That $5.6B valuation at close represented a 65% haircut from the $16B Western Digital actually paid for the company in 2016. Wall Street thought flash memory was a dead commodity. They were wrong. 11 months later, Sandisk is a ~$50B juggernaut. The “easy” money in GPUs has been made. The real money is now in the second-order effects—the bottlenecks that hyperscalers can’t engineer around, memory storage is just one of them.”
Is Applied Materials (AMAT) “an unavoidable “complexity tax” on advanced chip production, or a cyclical capital equipment vendor nearing its peak?” You decide 😉
https://open.substack.com/pub/aryadeniz/p/deep-dive-applied-materials-amat
xAI going from $50 bn 2024 valuation to $200 bn on a $20 bn raise.
https://open.substack.com/pub/aidisruption/p/xai-raises-20b-more
With OpenAI looking to increase its next up-round from a $500 bn to an $830 bn valuation this year and to float at $1.5 tn in 2027, SpaceX looking to IPO this year or next at $1.5 tn, and Anthropic eyeing a $350 bn valuation round, we’re headed into uncharted waters for private-into-public. Saudi Aramco can’t be considered a real comparator here IMO.
Grok over ChatGPT? xAI over OpenAI?
https://open.substack.com/pub/ruben/p/grok-chatgpt
AI at the science frontline and frontier:
https://open.substack.com/pub/sciencewtg/p/americas-genesis-mission-artificial
Not just shortages of general-purpose GPUs and AI ASICs, but a crisis brewing in CPU supply too, with TSMC only able to meet 80% of high-end demand, leading to likely 50% price increases:
https://open.substack.com/pub/fundaai/p/deepintc-agentic-ai-and-supply-bottlenecks
Very true. The advances may come in the boring industries, and mundane-if-impressive tech has longer adoption curves than current expectations credit:
https://substack.com/@chocolatemilkcultleader/note/c-196658825?r=2kxl2k
Lowdown on the Vera Rubin GPU arriving in the fall this year (45°C warm-water cooling, 130 kW per rack):
https://open.substack.com/pub/datacenterrichness/p/vera-rubin-enters-production-what
AI tools worth paying for?:
https://open.substack.com/pub/artificialintelligencemadesimple/p/the-ai-tools-im-actually-paying-for
In the opinion of the piece’s author, ChatGPT and Claude trump Gemini on the top tier ($/£200 pcm).
This analysis resonates:
State of World in the era of ML and MAGA: “[De]Globalism, resource nationalism, remonetisation of silver, Gold in China, the horseshoe is real, and the need for a New New Deal as the machines drive inequality.”
AI Bulls and Bears: “The bulls are right about 2035. We’ll be 8-50x short on compute. Tokens will be the kWh of knowledge work. Current capex will look prescient. But the bears may be right about today, where our ballpark is we are investing ~12x what the companies are making.”
Capex utilisation: “Phase 1 (2025-2027): Oversupply. Build faster than demand. Utilisation collapses. Economic losses mount. This is now.
Phase 2 (2028-2030+): Thresholds cross. Demand explodes. Hit compute ceiling. This is what the bulls are modeling.
Both are true. They happen in sequence, not simultaneously.”
From:
https://www.campbellramble.ai/p/26-views-for-2026
Why isn't AI taking all the white collar jobs (already)?:
https://open.substack.com/pub/randomwalkwithdata/p/if-ai-is-taking-jobs-then-where-are
Answer perhaps: aging in place. Maybe we need more RE in the FIRE?
Everything on the new release of Claude Code 2.1.0:
https://open.substack.com/pub/thezvi/p/ai-150-while-claude-codes
When I read these model reviews I can't help thinking, "where's the moat?" Durable competitive advantage is the name of the game. No moat. No money.
SK Hynix, Micron and Samsung highlighted on this one as (mixing metaphors again) the ‘gatekeepers’ to solving the data centre storage ‘bottleneck’:
https://open.substack.com/pub/uncoveralpha/p/2026-ai-landscape-who-benefits-the
SK Hynix was tipped by Woodford last year, since when it's gone from ~200,000 South Korean Won a share to ~750,000, on a 12x TTM P/E today compared to just 6x a year ago. Say what you like about Neil (and no doubt WPCT/SUPP/INOV has to be one of the biggest % disasters in Investment Trust history), he still called this one right.
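A quick cross-check of those (rounded, approximate) numbers: if the share price is up ~3.75x but the multiple only doubled, trailing earnings must have roughly doubled too. A minimal sketch, using the ballpark figures above rather than exact market data:

```python
# price = P/E * EPS, so implied EPS growth = (price ratio) / (P/E ratio).
price_then, price_now = 200_000, 750_000  # KRW per share, approximate
pe_then, pe_now = 6, 12                   # TTM P/E, approximate

eps_growth = (price_now / price_then) / (pe_now / pe_then)
print(f"Implied TTM EPS growth: ~{eps_growth:.2f}x")  # ~1.88x
```

So most of the move is earnings, not just multiple expansion, on these rough numbers.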
Alphabet overturns both Cinderella-turned-prima donna OpenAI and one-time (now has-been?) innovator Apple:
https://open.substack.com/pub/aidisruption/p/ais-ultimate-cinderella-dethroning
Wouldn't it be great, right after OpenAI floats, to have an ETF that was 50/50 long Alphabet and short OpenAI, and long an equal-mix basket of Volkswagen and Uber and short Tesla?
Still, Tesla has been the latter-day equivalent of the short Japanese Government Bonds 'widow-maker' trade, which didn't work for well over 30 years (until it suddenly did in the last year or two).
The Apple/TSMC partnership (a deep dive):
https://open.substack.com/pub/semianalysis/p/apple-tsmc-the-partnership-that-built
The TSMC fundamentals over the last several years are amazing.
Very bearish on TSLA/FSD/Optimus:
https://open.substack.com/pub/neuralfoundry/p/teslas-robotaxioptimus-dreams-are
But since when has TSLA not traded as a narrative asset? Narrative momentum has been the TSLA playbook from day one. Take the point on Waymo LiDAR reliability being on a different level to anything Tesla fields. Of course, in his deep dive on Tesla in Uncharted Territories, Tomas Pueyo takes the opposite view.
It’s all about context window memory: storage stocks melt up:
https://open.substack.com/pub/amritaroy/p/memory-and-storage-stocks-are-melting
The social, economic and environmental cost of data centres:
https://youtu.be/NbOmVwT22i8?si=wJhoUzZE8pj90Bmo
Astonishing claims on I/O speed up and power requirements from fractal, distributed computing.
Is this a steaming load of BS or something worth investigating further???:
https://open.substack.com/pub/fractalcomputing/p/if-data-is-the-new-oil-what-if-prices
My suspicion antennae are screaming red alert, but it beggars belief that there's not more info on this given the magnitude of the claims made. Why are OpenAI et al not all over this? They're invested in survival and winning, not in propping up a failed data-centre paradigm of bigger-and-more-expensive-is-better. I'm struggling to see how *everyone* but the fractal computing people would be incentivised to downplay or ignore this. Maybe my framework (sociological motivation for sidelining) for looking at this question is wrong.
DeepSeek drops paper on R2 v R1 (in an expanded R1 paper, from 22 to 86 pages of details):
https://open.substack.com/pub/aidisruption/p/deepseek-drops-full-r1-tech-report
32x cheaper per token in about a year.
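Back-of-envelope on what that implies if the decline were smooth (the 32x-in-a-year figure is from the linked piece; the monthly rate is just my arithmetic):

```python
# A 32x annual cost drop, compounded evenly over 12 months, works out
# to 32**(1/12) per month: about 1.33x, i.e. tokens getting roughly
# 25% cheaper every single month.
annual_factor = 32
monthly_factor = annual_factor ** (1 / 12)
print(f"~{monthly_factor:.3f}x/month, ~{(1 - 1 / monthly_factor) * 100:.0f}% cheaper each month")
```

A pace with no obvious precedent even by Moore's Law standards, if it holds.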
AI = energy:
https://substack.com/@denisgorbunov/note/c-197401471?r=2kxl2k
Claude Code 2.1.0 'new' agentic features:
https://open.substack.com/pub/aidisruption/p/claude-code-210-official-release
But Claude 4.5 and Claude Code apparently not too shabby either:
https://open.substack.com/pub/thezvi/p/claude-codes
One blogger on joining OpenAI:
https://open.substack.com/pub/generatives/p/on-joining-openai
Masayoshi Son, SoftBank, Switch and financing the AI boom:
https://open.substack.com/pub/netinterest/p/financing-the-ai-boom-2
“all the big software companies are [capital intensive] hardware companies now”
“in 2017, AI wasn’t LLMs. AI was artificial general intelligence (AGI). I think people didn’t think of LLMs as being AI back then. I mean, I grew up on science fiction books, and they predict a lot, but none of them pictured “AI” as something like a search-intensive chatbot”
“The secret to Google search was always how cheap it was, so that informational searches that were not monetizable (and make up 80% or more) did not pile up as losses for the company. I think this is the fundamental problem with generative AI and LLMs today—they are so expensive. It is hard to understand what the profit model is”
Michael Burry with Dwarkesh Patel on how the AI revolution has, and has not, lived up to expectations:
https://open.substack.com/pub/post/p/the-ai-revolution-is-here-will-the
“How long can Nvidia’s “insane demand” for GPUs last? And what happens to the company when the peak cash cow of their only viable revenue source is over? Nobody has given a good explanation. The semiconductor industry is cyclical by nature, Nvidia’s market cap rise has lifted the entire Semiconductor sector with it, but it’s boosted mainly on myths of scaling”:
https://open.substack.com/pub/futuresin/p/nvidias-2026-updates
Azeem on work after work and living in world of AI enabled automation:
https://open.substack.com/pub/exponentialview/p/artisan-premium-making-in-2026
Ohio gets it right? 6 bn cubic ft per day of new natural gas. Cheap electricity. Attract data centers. New construction jobs:
https://open.substack.com/pub/doomberg/p/intelligent-design
More critiques of the Philip Trammell and Dwarkesh Patel Capital in the 22nd Century ‘thesis’:
https://open.substack.com/pub/pricetheory/p/ai-labor-share
“AI 2027” now AGI 2034?? Some detailed and wild extrapolation here:
https://open.substack.com/pub/aifutures1/p/ai-futures-model-dec-2025-update
Playing around with their assumptions gets wilder still:
https://www.aifuturesmodel.com/
The map is not the territory
It (the AI Futures Model) is just so nuts here that it's actually worth quoting Seth Lloyd's widely cited 1999 MIT paper on "The Ultimate Physical Limits of Computation":
https://arxiv.org/abs/quant-ph/9908043
“A kilogram of ordinary matter holds on the order of 10^25 nuclei. If a substantial fraction of these nuclei can be made to register a bit, then one can get quite close to the ultimate physical limit of memory without having to resort to thermonuclear explosions. If, in addition, one uses the natural electromagnetic interactions between nuclei and electrons in the matter to perform logical operations, one is limited to a rate of approximately 10^15 operations per bit per second, yielding an overall information processing rate of ≈ 10^40 operations per second in ordinary matter. Although less than the ≈ 10^51 operations per second in the ultimate laptop, the maximum information processing rate in ‘ordinary matter’ is still quite respectable.”
“The ‘ultimate laptop’ is a computer with a mass of one kilogram and a volume of one liter, operating at the fundamental limits of speed and memory capacity fixed by physics. The ultimate laptop performs 2mc²/πħ = 5.4258 × 10^50 logical operations per second on ≈ 10^31 bits. Although its computational machinery is in fact in a highly specified physical state with zero entropy, while it performs a computation that uses all its resources of energy and memory space it appears to an outside observer to be in a thermal state at ≈ 10^9 degrees Kelvin. The ultimate laptop looks like a small piece of the Big Bang.”
Somehow, even with AI-assisted accelerating progress (which, with respect to AI Futures, is one heck of a big assumption to make here) I can't see us getting anywhere even close to just the room-temperature version of the 'ultimate' computer, at ~10^40 operations per second per kg/litre. If so, this then surely rules out the AI Futures main projection on grounds of practical, if not physical, credibility (AFAICT).
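Lloyd's headline number is easy to sanity-check. A two-line reproduction, using rounded standard values for c and ħ (nothing here comes from the AI Futures model itself):

```python
import math

# Lloyd's bound: maximum logical ops/sec for mass m is 2*m*c^2 / (pi * hbar).
c = 2.998e8        # speed of light, m/s (rounded)
hbar = 1.0546e-34  # reduced Planck constant, J*s (rounded)
m = 1.0            # kg: the 'ultimate laptop'

ops_per_sec = 2 * m * c**2 / (math.pi * hbar)
print(f"{ops_per_sec:.3e} ops/s")  # ≈ 5.4e50, matching the quoted 5.4258 × 10^50

# The 'ordinary matter' figure is just 10^25 nuclei-bits times
# 10^15 operations per bit per second:
print(f"{1e25 * 1e15:.0e} ops/s")  # 1e+40
```

So the room-temperature 'ordinary matter' ceiling sits some ten orders of magnitude below the ultimate laptop, and still absurdly far above anything buildable.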
Worth emphasising that (per the footnotes to the linked AI Futures' Substack post at #475) back in 2023, not long after ChatGPT dropped, the median AI 'expert' prediction for AGI (assuming it happens) was either 2047 or 2116 (2047 for "unaided machines outperforming humans in every possible task", and 2116 for all human occupations becoming fully automatable).
I give Francis Galton’s wisdom of crowds (here a market in expert projections) moderately higher credence as a prior than AI Futures appears to.
Some ordinary, and all too plausible, bad AI futures (no apocalypse/no economic collapse, just sucks in a regular, if still profoundly depressing, way; think Idiocracy):
https://open.substack.com/pub/bloodinthemachine/p/four-bad-ai-futures-arrived-this
Nvidia v AMD = Apple iOS v Android?
https://open.substack.com/pub/techfund/p/nvidia-vs-amd-apple-vs-android
Note parameter count growing at a trend of 10x p.a., test-time scaling reasoning-token use increasing by 5x p.a., and Rubin set to give 5x the performance of Blackwell despite the number of transistors only increasing by 60%.
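Taking those two Rubin figures at face value, the implied architectural (per-transistor) gain falls out directly:

```python
# 5x the performance from only 1.6x the transistors implies roughly a
# 3.1x improvement per transistor (architecture, memory, packaging etc).
perf_gain = 5.0        # Rubin vs Blackwell, as quoted
transistor_gain = 1.6  # +60% transistors, as quoted
print(f"~{perf_gain / transistor_gain:.2f}x performance per transistor")  # ~3.12x
```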
There's nothing new under the sun, whether (as Jesse Livermore, writing as Edwin Lefèvre, observed in his autobiographical 'Reminiscences of a Stock Operator') in the arena of investing or, for that matter, in computer science and AI.
Just as the Panic of 1907 and the Great Crash of 1929 share many features, will we look back upon the later 2020s as being a 1974 like (first) outbreak of AI winter moment?:
https://youtu.be/hYnadoy8aQE?si=IWMKQk8zRPH4YGad
The first explicitly articulated AI bubble ran from the coining of the term as a grant-research marketing tool at the Dartmouth conference of 1956; but it took until long after the General Problem Solver of 1957 and the Perceptron of 1958 failed (both already obviously so by the time that Eliza debuted in the mid-1960s, in a misfired attempt to show the shallowness of the field) for the UK to finally pull the funding plug with the Lighthill report in 1973, and for DARPA to do likewise in the US the following year.
How long might it take this time around? If past is prologue (which it rarely is), and taking the first LLMs as 2017 and comparable in relative importance to, say, the Perceptron of 1958, then we'd be looking at 2032/33 for things to fall apart.
But the cycle of hype and the vortex of VC and Private Lending funding is much greater now than then. That suggests to me an acceleration of developments and cycles now relative to the field of ‘AI’ in the 1960s.
A reminder from a year ago of why AGI isn’t happening anytime soon:
https://youtu.be/By00CdIBqmo?si=otAmZIwPlfYzzqsj
But, as the previously posted YT piece ends by noting, by 1980 an AI spring had begun (after the onset of AI winter in 1973/74), just as the crash of dot.com in 2000-2002 led, from the early 2010s, to a spring and ultimately long summer for what became FAANG, the Magnificent Seven and the Hyperscalers.
This is the US-orientated bull case for LLMs *already* creating a productivity boom, extracted from the start of one of the many (many) investing-related emails which I now get each day (I can't share a link as there's no associated Substack etc to share):
“Productivity is eating inflation for lunch
“Experts” told us what would happen. All the tariffs, deportations, pressure for lower interest rates, pointed to a guaranteed crash. They were wrong…Trade Deficit: Dropped to $29.4B in October, down from $136B in March. That is the lowest level since 2009. GDP Growth: The Atlanta Fed projects 5.1% for Q4. The historical average since 1947 is only 3.2%. We are growing nearly twice as fast as “normal.” Inflation: It isn’t spiking. It is dropping. Down to 1.9% (Truflation). How does that happen? Productivity. It’s up nearly 5%. Companies are figuring it out. They’re using AI. They’re navigating deregulation. They are producing more profit with fewer people.”
I have to say, against my sceptical (cynical?), miserly, curmudgeonly instincts, this is (I reluctantly concede) actually quite plausible. I'm using enterprise Copilot at work. Copilot is not exactly highly regarded by premium paid-tier LLM 'super users'. But I'm impressed nonetheless. Surprisingly impressed.
On appropriate tasks (some of which are highly complex and novel, multi-layered questions), and with my guiding, shaping, evaluating and amending (where needed) of outputs, I reckon (even with all that human domain-'expert' input) that, overall, I'm still getting at least a conservatively estimated composite 2x to 3x speed-up (i.e. 3 days' work compressed into 1; although, sadly, this just leaves more time for bureaucracy to fill the vacuum productivity creates).
And, given the plausibility from direct product experience (not the free chatbots, but the professional enterprise suites), can we really say that the firms behind the hiring freezes are just using AI as an excuse (and that it's really a recession just around the corner being anticipated by forward-looking, almost prescient, HR teams at those companies)?
Isn't it credible (and parsimonious) to suggest that top, paid-up-tier frontier models, which some months ago were already scoring 126 to 148 points (most commonly 130, the threshold from very superior into genius) on various recognised, respected and well-established IQ tests (and going up by 2.5 points a month), might (increasingly) be displacing the need to take on inexperienced junior staff?
Doesn’t seem at all far fetched to me, even as the econ bloggers on Substack grasp at every explanation under the sun:
https://open.substack.com/pub/apricitas/p/the-no-hire-economy
On “What Happens When Superhuman AIs Compete for Control?” 🙁
https://open.substack.com/pub/aifutures1/p/what-happens-when-superhuman-ais
It's either 6,000 words of 'sci-fi' 'faction' presented both as a warning scenario and as an aid to prudent risk management… or we've had it.
I can't see how 2027 is still on the table as a serious timeline for AGI (or even 'just' for fully Automated Coders), given the evident slowdown in the rates of improvement from GPT4 to 4.5, GPT4.5 to 5, GPT5 to 5.1, GPT5.1 to 5.2, and from Gemini 2.5 to 3, as compared to the rate and overall magnitude of the incremental improvements in utility in going from GPT3 to GPT4.
This guy’s an option trader but even he’s seeing a massive impact now from ML/ LRMs/ LLMs in his work:
https://open.substack.com/pub/moontower/p/work-is-going-to-feel-very-different
You can’t just dismiss this phenomenon out of hand.
It can be a bubble (of one or more sorts) and still be transformative.
Many things can be true at once.
Building moats at the AI application layer:
https://open.substack.com/pub/artificialintelligencemadesimple/p/how-openai-builds-amazing-products
New 6,000 word breakdown of the whole AI stack linked to here:
https://substack.com/@scstrategist/note/c-198399762?r=2kxl2k
Last quarter hyperscaler capex accelerated to $142 bn, over 3 months…! Think of that: 91 days, $142 bn, for one type of fixed investment by one small group of massive companies. How long before data centres are bigger than defense in the US? (Although, that said, DJT now claims to want to boost 'Department of War' expenditure from $950 bn this year to $1.5 tn the next – not to be taken seriously, I hope.)
To quote the pitch for attention in the preface note: “every bottleneck. Power generation. Thermal management. Electrical infrastructure. Connectivity. Physical safety”
Despite incrementally increasing utility, core failure modes (most notably non-specificity of output to instructions, of which hallucinations were the most obvious early examples) still seem to be 'baked in' to the current diffusion and transformer architectures, even after nearly a decade:
https://youtu.be/bv19nXfb0bc?si=OjDUYSY5IdLaoZpC
Not promising for AGI.
View from Richard Murphy on the Left:
https://youtu.be/68iTH6mX-0s?si=NI0T6F50ZXMty6ze
Agree that financing arrangements lead to shadow bank risk and consequential contagion risk.
Also agree with him on job destruction.
The inflation effect, however, is exceedingly unclear.
Disinflationary. Reinflationary. Who can tell? I can’t. I don’t think Murphy can either, though he doesn’t realise it.
On the face of it, where ML/LRMs/automation substitutes (with various degrees of imperfection) for (cognitive) labour, it would seem disinflationary/deflationary.
On the other hand, it is true that the cost of chips and energy will probably go up, and that’s prima facie inflationary.
Not sure that there’s any evidence in the UK though that data centres are going to affect water demand or pricing.
Might well be different for a data centre in Arizona of course.
Any higher productivity from ML etc could, depending on the context and scenario, be either disinflationary or reinflationary (or cancel out neutral).
Again, no one can know.
Agree with Murphy that the BoE will probably call it all wrong and that, as always, politicians are clueless/asleep at the wheel on this.
As you would expect, a precautionary perspective in this one.
Finally, saying the silent part out loud:
https://open.substack.com/pub/aisupremacy/p/generative-ai-might-be-hurting-the-labor-market-future-of-jobs
Yes, it is, I think, likely that LLM/LRMs, and wider neural net ML applications, are now starting to cause measurable, and indeed significant, permanent job losses.
Kinda obvious that it probably would, sooner or later; and so it seems likely that it is now actually showing up.
To paraphrase (and mangle) Sherlock Holmes: when you've eliminated everything else, what remains is the truth.
If this is indeed 'AI'-related job loss (especially given the strong GDP prints Stateside), then the issue becomes not if, but how far, and for how long, this goes on???
Are we about to enter a world transformed (and not necessarily for the better) job-wise???
Surface success versus deep realities, and safe versus innovative, in China's AI execution phase:
https://open.substack.com/pub/hellochinatech/p/china-ai-fast-follower-trap
Although the Chinese seem to be doing pretty damn well on the innovation front to me.
They don't exactly feel like Kodak in 1975, which developed, but then never commercially pursued, the digital camera:
https://open.substack.com/pub/robopub/p/world-no-1-chinese-firm-open-sources
And per my #490-491 comments above, the US job situation is probably worse than feared:
https://open.substack.com/pub/shanakaanslemperera/p/the-phantom-jobs-thesis-americas
The AI blizzard is coming in thick and fast to my investment-related inbox. I'll quick-fire the next nine below in the interests of economy.
Here’s the first, starting with Mr Musk’s very bold claims:
https://open.substack.com/pub/aidisruption/p/musks-3-hour-bombshell-interview
Evolve AI agents by getting them to compete:
https://open.substack.com/pub/importai/p/import-ai-440-red-queen-ai-ai-regulating
“A one-shot (AI) warrior defeats 1.7% of human warriors. Best-of-N sampling produces a set of warriors that can defeat 22.1% of human warriors. Evolutionary optimisation against each human warrior generates a specialised warrior for every opponent; this set can collectively defeat 89.1% of human warriors and defeat or tie 96.3%.”
For powering data centres China’s a hundred years ahead of the US, apparently…
https://open.substack.com/pub/exponentialview/p/data-to-start-your-week-26-01-12
How one super user uses Claude:
https://open.substack.com/pub/aidisruption/p/my-2025-claude-code-mantra-simplify
Keep it simple.
AI capex is not slowing down…
https://open.substack.com/pub/crackthemarket/p/the-crack-the-market-signal-1
We're on a motorcycle, either about to career off a cliff or head up a jump ramp to leap over half a dozen monster trucks!
As you might expect, Gary thinks it’s the cliff and not the jump ramp ahead:
https://open.substack.com/pub/garymarcus/p/lets-be-honest-generative-ai-isnt
I'm personally really sceptical of these surveys which show 'no' productivity boost. Have they actually *used* this tech? What are they measuring? And how? Doesn't ring true IRL.
Software AI was just the warm up act:
https://open.substack.com/pub/theaiopportunity/p/robotics-will-be-the-next-decades
Chiplets, smart rings, HBM4, silicon photonics, IEDM papers:
https://open.substack.com/pub/marklapedus/p/the-latest-news-in-ic-packaging-and-ffb
The Rigetti twelve-by-nine-qubit quantum computer is not necessarily a massive breakthrough. If all nine qubits are fully error-corrected logical qubits, then that's a maximum of a 512-fold (i.e. 2^9) speed-up from quantum parallelism per operation per qubit over a classical logic gate.
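The arithmetic behind that caveat, taking the framing above at face value (real quantum speed-ups are algorithm-dependent, e.g. quadratic for Grover search, so 2^n is a loose upper bound on the parallelism, not a guaranteed gain):

```python
# n logical qubits span a state space of 2**n amplitudes, so nine
# fully error-corrected qubits give at most a 2**9 = 512-fold factor.
n_logical_qubits = 9
print(2 ** n_logical_qubits)  # 512
```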
On chiplets, Cadence scores very highly on quality growth metrics.
And, lastly, this from the aforesaid commentary (from Money Machine Newsletter) on the effect of AI on jobs and productivity:
“There’s an old habit in the investing world. It goes like this…If companies aren’t hiring, the economy must be crashing. It’s time to break that habit. Everyone spent the last year panicking about a “cooling labor market.” They looked at the slowing hiring rates and flattened work hours and predicted the worst. They were wrong. While mainstream media is obsessing over headcount, real GDP accelerated to a 4.3% annualized pace. Productivity surged at nearly 5%. This is what efficiency looks like. We are seeing an economy that generates more output with fewer workers. That isn’t a recession signal. That is the holy grail of business. It’s a productivity-driven expansion. For corporate America, this is the perfect setup: Unit labor costs go down. Inflation pressure eases. Profit margins expand. Companies are realizing they can grow earnings without aggressive hiring or raising prices. They are doing more with what they have. That’s why earnings and revenues are hitting records even while payroll gains moderate.”
And on the current viability of the picks, shovels and power producers as investment themes:
“AI’s big checks just cleared, expect more zeros in the market. Microsoft spent ~$9B on IREN—a stock we called out early in the year, when it was on no one’s radar.
Amazon tossed in ~$5B for CIFR. Yesterday, these were bitcoin miners. Today, they’re AI power plants. AI’s no longer about compute—it’s about capacity.
Each new data center sucks down enough juice for 100,000 homes. Satya Nadella (Microsoft CEO) nailed it: “We have the compute. We just don’t have the power to plug it into.” You can buy GPUs. You can’t buy electricity. At least not fast enough. So the leverage quietly shifted—from silicon to supply. How long will this last, who knows. Could this change? Absolutely. But not at this very moment”.
And a couple of stragglers which I ‘missed off the list’, the first on Claude Code’s agentic qualities of being a “repeatable loop that can read context, plan, take actions, verify results, and keep going….a general execution interface for knowledge work”:
https://open.substack.com/pub/neuralfoundry/p/claude-code-is-taking-over-everything
And on Apple choosing Google Gemini to power Siri:
https://open.substack.com/pub/fundaai/p/researchgoog-google-gemini-may-become
Another win for Alphabet with TPUs, Gemini 3.0, AI search summaries, Waymo tie in, Google cloud, DeepMind, AlphaFold, and all the rest. A good year for Google.
@Delta Hedge — Evening! You write:
My experience, reading, and take so far remains… uncertain. The only people I know who are persistently reporting productivity boosts from AI without it damaging their output are programmers. I know plenty of others who are using AI but who I feel are basically swapping one set of issues for another (e.g. the AI produces something, but then they spend loads of time fixing it, rather than thinking harder about a cleaner, neater solution that would have saved them more time) or whose output is suffering (here I'm thinking mostly of writers).
With that said, I think AI is having an impact at the margin. Even as an extra-good search tool it's helpful. It must be increasing output for, say, people who have to produce a lot of rote copy for product descriptions. (I had a story in last week's links about the death of the copywriter.)
Three years in I don’t know anyone who has lost their job to AI, seen anyone lose their job to AI, or faced that threat very viscerally in reality (versus the potential).
This is not to say there isn’t disruption. Blogs are certainly being disrupted away by AI! But I’m not sure substitution is the same as a productivity boost? (Well I’m sure it’s not but you take my point).
TLDR something is happening but it isn’t (yet) as big as the hullabaloo, IMHO.
Evening @TI.
Direct knowledge is a dangerous thing, given (as Morgan Housel reminds us) that we each sample just 0.000000001% (1×10^-11) of the lived experiences of the 117 bn odd people who've ever lived. But, that said: even though it's clearly not at all what any of us ever thought AGI would be, and even though, on any reasonable and fair-minded view, it isn't (at least yet) anywhere even remotely near what a truly generalisable AI should be capable of; the top paid-tier frontier LLMs still deserve more credit for what they demonstrably *can* do than they're currently getting from most people most of the time.
I'm using mid-tier (£24 pcm) Copilot to do my self-assessment this month. It seems to be getting it right (I have to make sure all the docs are OCR'd first, of course).
Just on educational YouTube today: ChatGPT 5.1 scores 88% on a 2nd-year undergraduate quantum physics paper in just 30 seconds, for a 3-hour paper:
https://youtu.be/JcQPAZP7-sE?si=vTd02DjjoTpLWeIh
How is it possible that this tech’s not having a positive effect on productivity? It beggars belief.
I’m not saying (unlike Elon) that this is the singularity. It certainly ain’t that.
But it is something, and probably something really quite important.
I am sure many companies will be able to downsize payroll and boost the bottom line P&L very soon. Whether they’ve the gumption and the brass neck to do so might turn out to be a different matter.
Of course, the socioeconomic and public finance effects (less payroll tax) *if* we see mass ‘head shed’ could be devastating if it’s handled badly.
But that doesn’t mean that the rate of profit in GDP can’t go up a lot even as a result of only mundane utility models. The aggregate profit margin is probably no longer a mean reverting series.
Of course, none of this means LRMs/LLMs can find out anything truly new (though they might, see AlphaFold and DeepMind). And without genuine innovation it’s an open question whether you can get long term economic growth out of ML. But you probably can reduce the labour share of output, make the Gini coefficient worse, make a few people very rich indeed, and raise the IRR for shareholders.
And of course the first to 'lose' their jobs will be the young would-be starters who never get taken on to begin with, because the cheapest lay-off is the one you don't make, because you freeze recruitment. I think this is demonstrably what we're starting to see in graduate employment stats.
My cohort (the fifty plus brigade) will be in the firing line eventually I guess, but at least I’m expensive to make redundant (21 months’ pay).
@Delta Hedge — They can definitely do *something*. The question is (a) how useful is much of what they do to someone who couldn’t do it already (b) or who doesn’t have to double-check it (c) or who couldn’t outsource it cheaply.
I appreciate your self-assessment tax return was (presumably) just an off-the-cuff example, but you’ll have to check this for errors, you can presumably do it yourself, and my accountant can do mine so easily he chucks it in for free with my limited company accounting. I imagine it takes him about 2-3 minutes, and most of that will be keying in EIS scheme numbers and pressing ‘go’. 😉
Again, I am with you in part. I think it's clear they will boost productivity to some extent. My total late-night gut-sense guess is perhaps they'll make everything 5-10% more efficient, averaged across all industries, all jobs, all tasks. That's (a) very meaningful (b) commercially valuable (c) probably not what's priced in.
This is not to say we won't see some kind of breakthrough, or more specialised/niche-trained models/instances/apps that start to cut into this or that area.
But (cautiously) I think the revolution is off the table for now, most likely.
Again, don’t get me wrong. I said to my most AI literate friend last night I think there’s still a 5-10% chance (again a total guess) that we could still be at the start of something existentially threatening and humanity-changing. Not because I can see it, but because I can still gasp in awe at a chatbot’s output and wave my hands over my imagination. And I couldn’t do this five years ago when this was still basically just ‘big data’. So it has to be given some kind of trajectory to endgame probability. Something *has* changed.
But for context he thinks on this tech there’s basically no chance. And he knows far more about it than I do.
Of course every word I've written above could look incredibly dumb in ten years. Again, I don't dispute something is potentially in play. (And a 5-10% boost to global GDP is meaningful anyway, depending on the time frames.)
“Interesting times” as they say.
Interesting points.
On your (a): if the task is outside the circle of competence/comfort zone of the person concerned, then they'd have to outsource to a professional at much greater cost than a paid-up LRM (let alone a free one).
If the task was something the person could do themselves, then the LRM can still do it so much quicker, even with checking.
Any 130+ IQ undergrad physics savant could get 88% on a 2nd-year paper on quantum mechanics (remembering that the paper used was one which had never been made public, and so couldn't have been within the model's pre-training set).
But no human could answer the paper at such a solid 1st-class (>70%) degree level in a matter of seconds (versus the 3 hours allotted for the exam).
There must (surely?) be a huge (actual / potential?) speed up for tasks here.
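For scale, the wall-clock arithmetic behind that example (3-hour paper versus ~30 seconds, as reported in the linked video):

```python
# A 3-hour exam answered in ~30 seconds is a ~360x raw time compression
# on that one task (which says nothing about end-to-end workflow gains).
exam_seconds = 3 * 60 * 60
model_seconds = 30
print(exam_seconds // model_seconds)  # 360
```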
*If* workflows can really be joined up / managed effectively by agentic automation imminently, then I’m finding it hard to see how there’s not then going to be some possibly really quite big productivity boost.
That doesn’t necessarily mean any more revenue/ sales for firms, but it would be disruptive, and it might be important.
On (b): these models are, in practice, becoming a heck of a lot more reliable, accurate and useful over time (so far).
It’s true that they still screw up some tasks and for really compute intense things like image creation they still can mangle the instructions.
But I can’t help feeling like we’ve passed the iPhone moment.
We're not, I think, any more in the Palm Pilot/PDA/Apple Newton phase of a 'nice idea' that's clunky, hard to use, and not that useful (the GPT3 era).
Whether things can continue to improve given scaling walls, energy constraints (especially getting new generation onto the network), financing woes (Oracle CDS spreads) etc is an open question; although, as a general steer to action, I tend not to bet too heavily against progress.
On (c): the outsourcing question is fascinating.
It might make outsourcing even cheaper and more useful / reliable because the offshore centre is fully utilising LRMs etc.
On the other hand, fully agentic models (if they do arrive soon, as promised) make automated reshoring / ‘inhousing’ much more economically viable too.
I suppose a virtual task doesn’t really have a location.
Although the comparison is inexact (to say the least), I’m slightly reminded of 1989, in the aftermath of Tiananmen, when my history teacher provoked derision for saying (IIRC, it’s stuck with me) that within 40 years the world would be looking to copy China’s economic model (state capitalism / market socialism).

It seemed ridiculous then, with China’s midget peasant-based economy, and the communist world teetering on the brink of collapse. Less than 37 years on, and China leads the world on robotics, manufacturing, exports, high-speed rail, the fastest urbanisation in history, the greatest and swiftest lift of people out of poverty in history, the largest PPP GDP in the world, etc.

Maybe in the early 2060s we will look back on the shaky start to the ML era in 2022/23 in a similar way.
@Delta Hedge — Cheers for further thoughts. Again, I am not saying they are not doing *anything*. And a 4% bump to productivity would be very meaningful.
But there is absolutely no way that using a chatbot is like hiring a PhD-level physicist, or anything close, except perhaps if the role is to answer questions directed at PhD-level physicists.
I agree the errors are down. But they are still there and they can be howlers. Moreover the bots still lack agency, and IMHO it’s going to be hard to translate LLMs into autonomous units that can navigate even a digital space *more cheaply and effectively* than a human.
But will this technology infiltrate all aspects of working/other life? It looks that way. Hence I’m coming down in the ‘human+’ camp at the moment.
In contrast, three years ago I thought huge job losses looked much more possible. But right now they fail the smell test. If people could buy AI employees for £50/£100 a month with the potency of a PhD-level graduate, we’d be seeing vast demand and collapsing employment. We’re not currently seeing either. Again, we’re seeing *something*, but not that.
Time will tell, and another six months will be another ten years in this field!
On the other hand, something to support your view that I just read:
https://www.businessinsider.com/mckinsey-workforce-ai-agents-consulting-industry-bob-sternfels-2026-1
Just as a follow-up, look at this thread on X on using ChatGPT to book flights, which allegedly ‘breaks’ SkyScanner etc:
https://x.com/riyazmd774/status/2010648637622370752
Yes, he apparently gets results. But clearly prompting the chatbot is at least as much effort as going to a booking website, and arguably much more.
Now you could argue this will all be wrapped up in a thin app that replicates the booking site’s functionality without the need for prompts.
But (a) surely that would benefit the incumbents because they already have brand, distribution etc and (b) likely it would be more expensive than bespoke software.
There’s currently a lot of SaaS under a cloud because of these sorts of fears. But I do wonder… 🙂
Again, there will *definitely* be disruption. (I don’t want my gentle pushback to go down in lore as ‘it’s all nonsense and a fugazi’, because I do believe it’s a disruptive and transformative technology. The question is to what and to what degree…)
All agreed.
It’s a heck of a lot of work, iterating and whatnot, to use these tools fully (and perhaps most people don’t) but, by God, for the right task, when they do work well, they’re (or at least can be) downright impressive.
And I don’t impress easily on tech. I stayed well clear of dot-com companies and thought social media in the aughts was a profitless scam. More fool me! 😉
I think the biggest issue is that it’s not obvious (at least to me) that it’s a case of ‘the West and then the Rest’ on AI.
China may be a lot closer to realising the benefits of this and interrelated automation tech than the US realises or understands.
I’ll post some links in a second from the last couple of days on this. I mention it here in case the comment gets auto-filtered into moderation (as it has more than one hyperlink per comment/post).
At the risk of sounding like the ‘China Cassandra’ here (“they’re pulling ahead; we’re doomed, I tell you!” 😉), and further to my last point in my previous post, I think there’s good reason to be concerned that the US’s ostensible head start in raw compute is not, in practice, the durable advantage it’s typically made out to be:
China’s lead or America’s (from YT yesterday, with the essay it references linked immediately below)?:
https://youtu.be/KKtbq-w4mzg?si=OVrecdf9fOCJniJe
https://kaskaziconsulting.squarespace.com/publications/my-essay-entitled-no-more-moore-so-what-then-for-microchips-nbspand-for-china
The Stargate mythos (also from YT yesterday):
https://youtu.be/K86KWa71aOc?si=r7-GDymRddLshfBC
And no one can verify the US data centre deals anyway, so why is the Chinese AI effort the one said to be opaque?:
https://open.substack.com/pub/davefriedman/p/the-ai-data-center-deals-that-no
And the US tech titans’ approach is as much theological (a digital God in a desert data centre) as the Chinese approach is pragmatic and technical:
https://open.substack.com/pub/shanakaanslemperera/p/the-gods-are-being-built-in-the-desert
Meanwhile, and linked to this concern, from Substack today:
DeepSeek is boosting reasoning with ‘sparse compute’:
https://open.substack.com/pub/aidisruption/p/liang-wenfeng-open-sources-memorydeepseek
And how Baidu is optimising the Chinese approach to AI:
https://open.substack.com/pub/hellochinatech/p/baidu-spinoff-valuation-trap