What caught my eye this week.
Bad news! Not only are the machines now coming for our cushy brain-based desk jobs, but our best response will be to hug it out.
At least that’s one takeaway from a report in the Financial Times this week on what kinds of jobs have done well as workplaces have become ever more touchy-feely – and thus which will best survive any Artificial Intelligence takeover.
The FT article (no paywall) cites research showing that over the past 20 years:
…machines and global trade replaced rote tasks that could be coded and scripted, like punching holes in sheets of metal, routing telephone calls or transcribing doctor’s notes.
Work that was left catered to a narrow group of people with expertise and advanced training, such as doctors, software engineers or college professors, and armies of people who could do hands-on service work with little training, like manicurists, coffee baristas or bartenders.
This trend will continue as AI begins to climb the food chain. But the final outcome – as explored by the FT – remains an open question.
Will AI make our more mediocre workers more competent?
Or will it simply make more competent workers jobless?
Enter The Matrix
I’ve been including AI links in Weekend Reading for a couple of years now. Rarely to any comment from readers!
Yet I continue to feature them because – like the environmental issues – I think AI is sure to be pivotal in how our future prosperity plays out. For good or ill, and potentially overwhelming our personal financial plans.
The rapid advance of AI since 2016 had been a little side-interest for me, which I discussed elsewhere on the Web and with nerdy friends in real life.
I’d been an optimist, albeit I used to tease my chums that it’d soon do them out of a coding job (whilst simultaneously being far too optimistic about the imminent arrival of self-driving cars).
But the arrival of ChatGPT was a step-change. AI risks now looked existential. Both at the highest level – the Terminator scenario – and at the more prosaic end, where it might just do us all out of gainful employment.
True, as the AI researchers have basically told us (see The Atlantic link below) there’s not much we can do about it anyway.
The Large Language Models driving today’s advances in AI may cap out soon due to energy constraints, or they may be the seeds of a super-intelligence. But nobody can stop progress.
What we must all appreciate though is that something is happening.
It’s not hype. Or at least for sure the spending isn’t.
Ex Machina
Anyone who was around in the 1990s will remember how business suddenly got religion at the end of that decade about the Internet.
This is now happening with AI:
Source: TKer
And it’s not only talk, there’s massive spending behind it:
Source: TKer
I’ve been playing with a theory that one reason the so-called ‘hyper-scalers’ – basically the FAANGs that don’t make cars, so Amazon, Google, Facebook et al – and other US tech giants are so profitable despite their size, continued growth, and 2022-2023 layoffs, is because they have been first to deploy AI in force.
If that’s true it could be an ominous sign for workers – but positive for productivity and profit margins.
Recent results from Facebook (aka Meta) put a hole in this thesis, however. The spending and investment is there. But management couldn’t point to much in the way of a return. Except perhaps the renewed lethality of its ad-targeting algorithms, despite Apple and Google having crimped the use of cookies.
Blade stunner
For now the one company we can be sure is making unbelievable profits from AI is the chipmaker Nvidia:
Source: Axios
Which raises the further question of whether, far from being overvalued, the US tech giants are still must-owns as AI rolls out across the corporate world.
If so, the silver lining to their dominance in the indices is most passive investors have a chunky exposure to them anyway. Global tracker ETFs are now about two-thirds in US stocks. And the US indices are heavily tech-orientated.
But should active investors try to up that allocation still further?
In thinking about this, it’s hard not to return to where I started: the Dotcom boom. Which of course ended in a bust.
John Rekenthaler of Morningstar had a similar thought. And so he went back to see what happened to a Dotcom enthusiast who went all-in on that tech boom in 1999.
Not surprisingly given the tech market meltdown that began scarcely 12 months later, the long-term results are not pretty. Bad, in fact, if you didn’t happen to buy and hold Amazon, as it was one of the few Dotcoms that ultimately delivered the goods.
Without Amazon you lagged the market, though you did beat inflation.
And yet the Internet has ended up all around us. It really did change our world.
Thematic investing is hard!
I wouldn’t want to be without exposure to tech stocks, given how everything is up in the air. Better I own the robots than someone else if they’re really coming for my job.
But beware being too human in your over-enthusiasm when it comes to your portfolio.
The game has barely begun and we don’t yet know who will win or lose. The Dotcom crash taught us that, at least.
Have a great weekend!
From Monevator
Does gold improve portfolio returns? – Monevator [Members]
How a mortgage hedges against inflation – Monevator
From the archive-ator: How gold is taxed – Monevator
News
Note: Some links are Google search results – in PC/desktop view click through to read the article. Try privacy/incognito mode to avoid cookies. Consider subscribing to sites you visit a lot.
UK inflation rate falls to lowest level in almost three years – BBC
Energy price cap will drop by 7% from July [to £1,568] – Ofgem
House prices are modestly rising, driven by 17% annual spike in new build values – T.I.M.
Hargreaves Lansdown rejects £4.7bn takeover approach – This Is Money
Judge: Craig Wright forged documents on ‘grand scale’ to support Bitcoin lie – Ars Technica
FCA boss threatens private equity with regulator clampdown – CityAM
Sunak says it’s 4th July, in the rain, against a subversive soundtrack [Iconic]– YouTube
Sir Jim Ratcliffe scolds Tories over handling of economy and immigration after Brexit – Sky
No, it’s not all the Tories’ fault… but Sunak and Hunt were too little, too late – Bloomberg
Products and services
Pay attention to catches as well as carrots when switching bank accounts – Guardian
Which energy firm offers the cheapest way to get a heat pump? – T.I.M.
How to get the most from second-hand charity shops – Which
Get £200 cashback with an Interactive Investor SIPP. New customers only. Minimum £15,000 account size. Terms apply – Interactive Investor
Nine out of ten savings accounts now beat inflation – This Is Money
Problems when transferring a cash ISA – Be Clever With Your Cash
Nationwide launches a trio of member deals worth up to £300 – Which
Transfer your ISA to InvestEngine by 31 May and you could get up to £2,500 as a cashback bonus (T&Cs apply. Capital at risk) – InvestEngine
Seven sneaky clauses in estate agent contracts that can cost you dear – This Is Money
Halifax Reward multiple account hack: worth up to £360 a year – Be Clever With Your Cash
Hidden homes in England and Wales for sale, in pictures – Guardian
Comment and opinion
No, the stock market is not rigged against the little guy – A.W.O.C.S.
The life hedge… – We’re Gonna Get Those Bastards
…is easier said than implemented [US, nerdy] – Random Roger
Checking out a fake Ray Dalio Instagram investing scam – Sherwood
An open letter to Vanguard’s new CEO – Echo Beach
If you look past the headlines, London is charging ahead – CityAM
Most of us have too much in bonds [Search result] – FT
Why we still believe in gold – Unherd
Are ‘fallen angel’ high-yield bonds the last free lunch in investing? – Morningstar
For love or money – Humble Dollar
Naughty corner: Active antics
Fund manager warns putting £20k in the US now will [possibly!] lose you almost £8k – Trustnet
A deep dive into US inflation, interest rates, and the US economy – Calafia Beach Pundit
A tool for testing investor confidence – Behavioural Investment
When to use covered call options – Fortunes & Frictions
Valuing Close Brothers after the dividend suspension – UK Dividend Stocks
Meme stock mania has entered its postmodern phase [I’m editorialising!] – Sherwood
Kindle book bargains
Bust?: Saving the Economy, Democracy, and Our Sanity by Robert Peston – £0.99 on Kindle
Number Go Up by Zeke Faux – £0.99 on Kindle
How to Own the World by Andrew Craig – £0.99 on Kindle
The Great Post Office Scandal by Nick Wallis – £0.99 on Kindle
Environmental factors
Taking the temperature of your green portfolio [Search result] – FT
The Himalayan village forced to relocate – BBC
‘Never-ending’ UK rain made 10 times more likely by climate crisis, study says – Guardian
So long triploids, hello creamy oysters – Hakai
Robot overlord roundup
We’ll need a universal basic income: AI ‘godfather’ – BBC
Google’s AI search results are already getting ads – The Verge
AI engineer pay hits $300,000 in the US – Sherwood
With the ScarJo rift, OpenAI just gave the entire game away – The Atlantic [h/t Abnormal Returns]
Perspective mini-special
How much is a memory worth? – Mike Troxell
We are all surrounded by immense wealth – Raptitude
How to blow up your portfolio in six minutes – A Teachable Moment
My death odyssey – Humble Dollar
Off our beat
The ultimate life coach – Mr Money Mustache
How to cultivate taste in the age of algorithms – Behavioural Scientist
Trump scams the people who trust him – Slow Boring
Buying London is grotesque TV, but it reflects the capital’s property market – Guardian
The algorithmic radicalisation of Taylor Swift – The Atlantic via MSN
And finally…
“Three simple rules – pay less, diversify more and be contrarian – will serve almost everyone well.”
– John Kay, The Long and the Short of It
Like these links? Subscribe to get them every Friday. Note this article includes affiliate links, such as from Amazon and Interactive Investor.







Reasoning models are dead. Long live reasoning models!:
https://open.substack.com/pub/artificialintelligencemadesimple/p/reasoning-models-are-a-dead-end-breakdowns
Shorter TL;DR of the TL;DR: it’s the wrong architecture – similar to Marcus’s disparaging of the pre-trained, back-propagated, weight-adjusting, deep-layered, next-token-prediction paradigm, and his advocacy of hybridised neural nets with a formal symbolic language overlay / syncretic approaches (deterministic, programme-like components plus probabilistic input/output net nodes).
The longer TL;DR is from the actual TL;DR: “Reasoning models are a dead end because they try to compress a dynamic control process into static weights. Reasoning is not a pattern you can train; it is an algorithm you must run. When you train on reasoning traces, you only capture the final surviving path”.
My view: we need a better model reward function and the ability to learn continuously to update a realistic world model.
Very bullish on TSMC. Massive upside to increased value chain capture on Nvidia chips:
https://open.substack.com/pub/shanakaanslemperera/p/tsmc-the-10-trillion-invisible-toll
But only today, I read elsewhere of Xi’s invasion/blockade plans….
Honoured by the link to this thread in the Monevator Weekend Reading Robot Overlord roundup 🙂 Thank you @TI.
The financing doom loop (and ‘dark fibre’ parallels: “the fiber optic buildout of 1999 where $500 billion of infrastructure investment produced 2.7 percent utilization and 12.8 percent default rates”):
https://open.substack.com/pub/shanakaanslemperera/p/the-stargate-deception
‘And yet it moves’: only today I got an LRM, over half a dozen iterations, to produce 100% correct, original, 6,000-word advice on a novel issue in less than a day, including iteration, checking and polishing up. Realistically that would take three days from scratch without any assistance from ‘AI’ (or whatever it should be called). The hyperscalers probably won’t raise the $3 tn to $8 tn needed for the data centre buildout by 2028 to 2030, nor will OpenAI or anyone get to 250 gigawatts of data centre electricity use by 2032 (equal to India’s entire electricity consumption). But that doesn’t mean it wouldn’t be desirable to try to get there given what the technology has already demonstrated it can do. It might be merely mundane utility, and in no way AGI itself or even comparable to meaningful notions of AGI, but I can easily see it replacing 50% or more of white collar / knowledge economy jobs eventually (maybe sooner rather than later), saving tens of trillions of dollars annually in payroll costs and feeding through directly to the bottom line in the P&L.
Crikey, this is a bit wild as a theory of how to get to AGI:
https://open.substack.com/pub/josecrespo/p/the-math-openai-doesnt-want-you-to
It might, or might not, be b*ts**t, but I do definitely agree with this bit:
“You cannot build AGI if you cannot see what your model is doing. You cannot deploy industrial AI if you cannot audit its reasoning. You cannot trust a system that cannot demonstrate coherence.”
Forward P/Es of the Mag’ 7 today:
https://substack.com/@dividendtalks/note/c-194489846?r=2kxl2k
Spot the odd one out there 😉
Musk is no Tony Stark IMHO, but he is a master salesman for his own shape-shifting narrative – a progressive promoting clean energy and BEVs, then FSD, then robotaxis; then a MAGA conservative pushing Optimus robotics, then SpaceX (which is maybe floating either later this year or next for an implied $1.5 tn market cap, at a 68x forward sales and ~100x trailing revenue valuation), and now xAI.
Then again, Palantir’s on a more than 100x forward sales valuation. It could fall by nearly 60% to $74 and still be no less than the present Price/Sales ratio of Tesla, although, unlike TSLA, PLTR isn’t (yet) a Mag’ 7 stock.
Jevons’ Paradox on turbo boost.
Why ‘AI’ (and by extrapolation and extension AGI) will require more workers (and create more jobs than it destroys) even if (eventually) 5 white collar workers end up being able to do the work of 50 in the pre-LLM era (<2023):
https://open.substack.com/pub/ruben/p/replaced
Intriguing, and at least faintly plausible; but, to begin with, the job losses must surely first occur as the quickest way for shareholder value orientated businesses to boost the bottom line.
In the Industrial Revolution, eventually, more jobs were created and wealth ultimately cascaded down.
However, initially, huge numbers of people in crafts were put out of work (the Luddite movement was right in this respect), and people working in factories were paid far worse, and had even more appalling working and living conditions, than the essentially peasant conditions which they or their parents had lived under before the onset of urbanisation.
The immediate historical precedent is not good.
Top ten AI stories of 2025:
https://open.substack.com/pub/generatives/p/10-ai-stories-that-shaped-2025
Personally, I’d have put DeepSeek higher up than number ten, but there’s no question that bubble fears, infrastructure spend and bottlenecks should be on the winner’s rostrum of the highest-priority pieces.
Arguably, the continued improvements to LRMs/ inference from DeepSeek R1, through Grok 3, Gemini 2.5 Pro, Claude 4 Sonnet, on to GPT-5, then Claude 4.5 Sonnet, GPT-5.1, Gemini 3 Pro, Claude 4.5 Opus, to GPT-5.2 is the biggest development of all of the last year, given the scaling wall.
All these IPOs do seem a bit 1999ish.
https://open.substack.com/pub/aidisruption/p/mega-ipos-flood-2024-investors-cash
Too much supply coming on for the demand for new AI and frontier tech (SpaceX etc) shares? Time as always will tell.
The benefits of Prompt Engineering:
https://open.substack.com/pub/tylerfolkman/p/i-automated-my-own-ai-prompts-heres
An outlook for the semis and revised hyperscaler capex in the years ahead:
https://open.substack.com/pub/techfund/p/ai-and-semis-outlook-2026
This mentions token usage / demand growth coming in much stronger in 2025 than expected beforehand; but the sheer scale of the growth in use, and the decline in prices, is not covered and deserves another mention here. One model provider (I can’t remember now whether it was OpenAI for ChatGPT or Google for Gemini) has seen monthly token usage go from 9.7 trillion to over 1.3 quadrillion (1,300 trillion) tokens every thirty days (!!), with the price per million tokens down 99.7%, all in 18 months (IIRC mid-April 2024 to mid-October 2025; I’d need to check though, this is from memory). One token = 0.7 to 0.8 words.
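For what it’s worth, here’s a quick back-of-envelope sketch in Python of what those from-memory figures would imply if taken at face value. The inputs are just the numbers quoted above, so treat the outputs as illustrative rather than verified:

```python
# Back-of-envelope check of the (from memory, so treat as illustrative)
# token usage and pricing figures quoted above.
usage_start_tn = 9.7      # trillion tokens per month, ~April 2024
usage_end_tn = 1_300.0    # trillion tokens per month, ~October 2025
months = 18

growth_multiple = usage_end_tn / usage_start_tn              # ~134x overall
monthly_growth = growth_multiple ** (1 / months) - 1         # ~31% per month

price_fall = 0.997                                            # quoted 99.7% fall
monthly_price_decline = 1 - (1 - price_fall) ** (1 / months)  # ~28% per month

words_per_token = 0.75                                        # mid-point of 0.7-0.8
words_per_month_tn = usage_end_tn * words_per_token           # ~975 trillion words

print(f"Usage multiple over {months} months: {growth_multiple:.0f}x")
print(f"Implied compound monthly usage growth: {monthly_growth:.0%}")
print(f"Implied compound monthly price decline: {monthly_price_decline:.0%}")
print(f"Implied words processed per month: ~{words_per_month_tn:.0f} trillion")
```

So roughly a 130x rise in usage (about 30% compound growth per month), alongside a price per token falling by roughly a quarter each month, if the 18-month window is right.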
A good all round interview today on all aspects of AI:
https://open.substack.com/pub/sophiecapital/p/inevitability-weekly-5
TPUs/Google, Apple and low end LLM apps, prospects for eventual ASI etc.
An engaging take on how AI will, over the next 50 years or so, lever the top percentile of workers / entrepreneurs and businesses:
https://open.substack.com/pub/generativevalue/p/2025-annual-letter
The more I read + learn the more convinced I am that the optimum portfolio is a modestly leveraged barbell with a lower risk underpinning of:
– some low volatility, high moat, low disruption risk surface and intersection of quality + value stocks with a degree of ‘inflation protection’ (especially consumer staples and infrastructure/ utilities);
– plenty of ‘risk off’ assets (gold and precious metals, long duration TIPS, global macro HF strategies);
– some cyclical broad commodity and deep value energy producer and junior miners (for dividends and optionality respectively) and opportunistic exposure to deeply discounted HY REITs and other CEFs/ITs;
– With a bleeding edge overlay, at the other end of the risk / opportunity barbell, of high growth, tech and tech disruption stocks from the mega caps right down the cap weight scale to (at the app/platform end) small/ micro/ nano cap (including fintech like operators);
– Plus some sort of juicing with a low single-digit allocation of starting capital to a DCA leveraged equity rotation strategy (which, if the worst happens and it goes to zero, still only burns through a few percent of the starting value of the portfolio), contributing less to the DCA when valuations are high and more in the recovery (e.g. back above the 200 DSMA) after a crash (e.g. more than a 20% drawdown on the 52-week high). A rough sketch of the mechanics follows below.
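A minimal sketch of that contribution rule, purely to illustrate the mechanics. The thresholds (200-day SMA, 20% drawdown off the 52-week high) are the ones mentioned above; everything else – the pandas Series of prices, the contribution multipliers, and using price-above-SMA as a crude proxy for ‘valuations are high’ – is my own assumption for the example, not a recommendation:

```python
# A minimal sketch of the DCA tilt described above - illustrative only.
# Assumptions not in the comment: daily closes for the leveraged sleeve in a
# pandas Series `prices`, arbitrary contribution multipliers, and using
# price-above-SMA as a crude proxy for 'valuations are high'.
import pandas as pd

def monthly_contribution(prices: pd.Series, base: float = 100.0) -> float:
    """Return this month's drip into the leveraged equity rotation sleeve."""
    sma_200 = prices.rolling(200).mean().iloc[-1]     # 200-day simple moving average
    high_52w = prices.rolling(252).max().iloc[-1]     # 52-week high (~252 trading days)
    last = prices.iloc[-1]

    drawdown = 1 - last / high_52w                    # drawdown from the 52-week high
    in_recovery = drawdown > 0.20 and last > sma_200  # crashed, but back above 200 DSMA

    if in_recovery:
        return base * 2.0    # contribute more in the post-crash recovery
    if last > sma_200:
        return base * 0.5    # contribute less when the market looks expensive
    return base              # otherwise the normal drip
```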
AI IPO bonanza:
https://open.substack.com/pub/aisupremacy/p/ai-in-2025-recap-the-year-the-old-rules-ai-trends
Money left on the table: where’s the value left now in the AI stack?:
https://open.substack.com/pub/randomwalkwithdata/p/assets-or-software-and-at-what-price
Depreciation coming (nice Capex chart):
https://substack.com/@therealrandomwalk/note/c-195510981?r=2kxl2k
Kernel optimisation:
https://open.substack.com/pub/importai/p/import-ai-439-ai-kernels-decentralized
Intriguing: “The most important takeaway is that decentralized training is growing quickly relative to frontier AI training, with decentralized training runs growing their compute by 20X a year versus 5X a year for frontier training runs. But the other important takeaway is that the sizes of these things are completely different – today’s decentralized training runs are still about 1000X smaller than frontier ones.”
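Taking those quoted growth rates at face value (a big assumption, since neither trend need persist), the implied catch-up time is a one-liner:

```python
# Rough catch-up arithmetic for the quote above, taking the stated growth
# rates and size gap as given (neither trend need persist, of course).
import math

decentralized_growth = 20.0   # x per year (quoted)
frontier_growth = 5.0         # x per year (quoted)
size_gap = 1_000.0            # frontier runs ~1000x larger today (quoted)

relative_gain_per_year = decentralized_growth / frontier_growth  # 4x per year
years_to_parity = math.log(size_gap) / math.log(relative_gain_per_year)

print(f"Years to close a {size_gap:,.0f}x gap at these rates: {years_to_parity:.1f}")
# ~5 years - if, and it's a big if, both growth rates held.
```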
$24 tn priced in, $11 tn left on table assuming 20% margin on $8 tn (to 2030?) cumulative Capex and a 22x PE:
https://substack.com/@therealrandomwalk/note/c-193161856?r=2kxl2k
So true.
https://substack.com/@mjreard/note/c-185945397?r=2kxl2k
Dwarkesh is my Go To AI Guru.
@Delta Hedge — The new Michael Cembalest / JP Morgan outlook for 2026 is here. Quite a bit of discussion about AI re: the market implications:
https://assets.jpmprivatebank.com/content/dam/jpm-pb-aem/global/en/documents/eotm/smothering-heights.pdf
Thanks for the link there @TI. Fantastic piece of research by JPM.
So we’re looking at the New Deal Public Works Administration, the Manhattan Project, the electrification of industry, the Interstate Highway System, the Apollo programme and the Broadband rollout all rolled into one (p.4)!
What could possibly go wrong?
I was struck by two points on p.9.
First the 1.5x-1.75x cost improvement and 1.25x-1.5x speed improvement of current frontier model assistance to existing human task experts.
Obviously, this seems cautious (and necessarily subjective, even though precisely quantified here) in itself, especially given that earlier models, like GPT-4o, are shown here as actually slowing down experts to half unassisted speed and doubling the task cost. That seems a very harsh assessment of earlier models’ IRL performance.
But what really strikes me here is the disconnect between the chart and reality.
The chart suggests to me organisations could shed 30% of their staff costs now for the same output, or, if the market they’re selling into has capacity, that they could increase output now by 50% for the same cost base.
Yet when I look left and right I see no sign of mass redundancy and no productivity boom.
It’s like what Alan Greenspan said about the Internet in the mid 1990s (1996 perhaps?), namely that the impact was showing up everywhere except in the productivity and growth data (and, of course, a lot of the corporate profits of that era turned out to be baloney, as we now know).
So what the heck is happening???
We’ve got a demonstrably very capable technology.
Aside from motors and electricity this is perhaps the most capable (and in many ways it is *the* most impressive) tech that I’ve ever come across.
And yet I see near zero sign of it (yet) changing businesses truly fundamentally, or in many cases even very much.
Are people just using this to work less to output the same with the same numbers and mix of staff and, therefore, with the same cost base?
Will businesses act like Tito’s worker cooperatives and basically let staff run the show for their benefit, or will shareholders at some point demand their pound of cost-cutting flesh out of payroll?
Maybe it’ll all just take a long time.
Organisational and institutional culture bottlenecks are in their own way as significant as energy, data and financing availability to the success of this endeavour.
The second point is on the question of data bottlenecks.
The footnotes on p. 9 reference a total of 4,750 tn tokens of data on the Internet, on video and in image libraries. That sounds like loads, but in the context of LLMs is it really?
Frontier models are (already) using up to 10exp26 (100 septillion) effective floating-point operations (FLOP) in training, and seeing demand of over 1,300 tn tokens per month.
Accordingly, it seems quite blasé for JPM to note that those 4,750 tn tokens available for training data (3,100 tn + 300 tn + 1,350 tn) will be enough for training until 2030.
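As a rough way to frame that question, the standard scaling rule of thumb is that training compute C ≈ 6·N·D (N = parameters, D = training tokens). The sketch below uses the ~10exp26 FLOP figure cited above; the parameter counts are my own hypothetical inputs, not numbers from the JPM report:

```python
# Framing the data question with the standard scaling rule of thumb
# C ~ 6 * N * D (C = training compute in FLOP, N = parameters, D = tokens).
# The parameter counts below are hypothetical, for illustration only; they
# are not figures from the JPM report.
def training_tokens(compute_flop: float, params: float) -> float:
    """Training tokens implied by a compute budget under C ~ 6*N*D."""
    return compute_flop / (6 * params)

compute = 1e26                       # the ~10exp26 FLOP frontier run cited above
for params in (1e12, 5e12, 1e13):    # 1 tn, 5 tn and 10 tn parameters (assumed)
    d_tn = training_tokens(compute, params) / 1e12
    print(f"{params:.0e} params -> ~{d_tn:.0f} tn training tokens")
```

On those assumptions a single frontier run only needs on the order of a few to a few tens of trillions of tokens today; whether 4,750 tn is really ‘enough until 2030’ turns on how fast compute budgets, epoch counts and multimodal data needs grow from here.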
Dave Friedman dissects Ed’s 19,000 word Ensh*tification of AI piece (previously linked to):
https://open.substack.com/pub/davefriedman/p/ai-capex-built-on-options-priced
Zvi’s on the case for that Philip Trammell and Dwarkesh Patel piece on ‘Capital in the 22nd century’ which I previously linked to, and which was also separately linked to by @platformer in the most recent W/e reading MV links:
https://open.substack.com/pub/thezvi/p/dos-capita
This isn’t good for AI power consumption needs – the northern Virginia data centre electricity crunch:
https://open.substack.com/pub/privatemarketsnews/p/the-infrastructure-bottleneck-nobody
Or perhaps hydrogen fluoride is the ‘real’ AI bottleneck (for making silicon tetrafluoride for chip wafers):
https://open.substack.com/pub/shanakaanslemperera/p/the-invisible-chokepoint
Scaling walls, energy connection backlogs, power generation shortfalls, data availability insufficiency, financing issues, revenue shortfalls and operating losses. It’s not an obviously happy picture.
AI accounting controversy aplenty, although, TBF, it’s all out in the open and above board. But being technically legit in terms of GAAP, IFRS and the law doesn’t make it a good idea or investable:
https://open.substack.com/pub/shanakaanslemperera/p/the-35-trillion-ai-mirage-the-measurement
The periodic table of AI (a very useful conceptualisation of how it fits together):
https://youtu.be/ESBMgZHzfG0?si=zXyG7D_nSFZb42Y7
“The ratio of committed capital expenditure to current revenue is approximately 107 to 1. For comparison, the most capital-intensive industries in the traditional economy, such as semiconductor fabrication or liquefied natural gas terminals, typically operate with capital expenditure to revenue ratios of 3 to 4. OpenAI is operating at roughly 30 times the capital intensity of industries already considered at the extreme end of infrastructure investment”: Breaking it down:
https://open.substack.com/pub/shanakaanslemperera/p/the-ouroboros-protocol
Nvidia Rubin CPX “a chip that analysts estimate costs roughly 25% as much to manufacture as a standard Rubin R200 while delivering approximately 60% of the compute performance”:
https://open.substack.com/pub/shanakaanslemperera/p/the-architecture-of-dominance-nvidias
Missing the forest for the trees on an AI bubble? This was an especially useful PoV on the glaring differences with the 1990s, but also the one obvious parallel: the last mile to the home (copper wire via dial-up) for pre-broadband Internet on the one hand, and the electricity bottleneck right now for data centres on the other:
https://youtu.be/Wcv0600V5q4?si=dOWPo9Xf1HCJ2Cjw
I play all my YT consumption at 2x and I suspect this guy sounds more convincing because of it 😉
A nice link to the 14 top AI resources (in the author’s opinion) from 2025:
https://open.substack.com/pub/theaiopportunity/p/the-2025-ai-breakthroughs
Sorry these are going to be so briefly introduced today; pressures of work mean I don’t have longer, unfortunately:
Software too cheap to meter:
https://open.substack.com/pub/amistrongeryet/p/software-too-cheap-to-meter
Said the same about nuclear power IIRC.
Trouble at mill with Meta and LLMs:
https://open.substack.com/pub/garymarcus/p/breaking-marcus-weighs-in-mostly
Don’t forget that the semiconductors have to be packaged up to work:
https://open.substack.com/pub/marklapedus/p/issues-challenges-with-glass-substrates
Reinforcement Learning as part of Intelligence As A Service (‘IaaS’):
https://open.substack.com/pub/semianalysis/p/rl-environments-and-rl-for-science
Shifting the bottleneck from “insufficient compute” to “insufficient context”: “Context is the new bottleneck”:
https://open.substack.com/pub/fundaai/p/why-dram-and-ssd-could-become-two
Vibe coding with Claude: 39 ‘free’ models to use, apparently…:
https://open.substack.com/pub/aidisruption/p/39-free-models-to-use-with-claude
Nvidia Rubin GPU a ‘game changer’???:
https://open.substack.com/pub/aidisruption/p/ces-nvidias-rubin-cuts-ai-inference
But elsewhere I see/read (might have been on YT, can’t recall now) effective FLOP is up ~750x in 2 years but cache memory only up 1.7x. The train can only travel at the speed of the slowest carriage…
Interesting spin off from Google Ventures:
https://open.substack.com/pub/appeconomyinsights/p/how-motive-makes-money
Nice write up from Compounding Quality of an AI-themed tailwind stock surfing the memory requirement wave:
https://open.substack.com/pub/qualitystocks/p/stock-of-the-week-micron-riding-the
Interesting. Three dimensional wafer/chip stacking:
https://open.substack.com/pub/marklapedus/p/nhanced-expands-hybrid-bonding-capabilities
Surely there’s a thermodynamic / cooling barrier to this though? Again, the weakest link in the process of delivering AI (i.e. rare earth mining and refining, electricity generation and network connection, memory usage, over-reliance on synthetic training data resulting in effective model collapse, vendor financing and private lending limits to data centre build-out funding, heat dissipation / cooling, organisational and societal resistance to deployment of AI at scale and in depth, push-back about removing expensive white collar human roles ‘from the loop’, psychological and training limitations on using models most effectively, and so on) becomes the bottleneck that’s all too difficult to bypass.
Personally, I think the optimum way forward to attempting AGI is a mix of:
– Massive diversification of research approaches (and therefore necessarily disinvestment away from LLMs), towards a new neuro-symbolic hybrid set of approaches.
– Double and triple down on algorithmic improvements over hardware. So much cheaper. The Chinese have this one right. It could be their winning card.
Robots on the move:
https://open.substack.com/pub/robopub/p/new-atlas-robot-heads-to-hyundai
Have we gotten DeepSeek and China completely and totally wrong????? 🙁
https://open.substack.com/pub/shanakaanslemperera/p/56-million-was-the-lie-589-billion
On the face of it bullish for US Picks and Shovels stocks in the data centre stack.
The buzz is shifting from GPUs to memory. SanDisk, Seagate. One share tipping site today: “In February 2025, Western Digital spun off Sandisk. Wall Street yawned. The market was brutal. Sandisk opened at $52.20—then promptly crashed 7% to close at $48.60. That $5.6B valuation at close represented a 65% haircut from the $16B Western Digital actually paid for the company in 2016. Wall Street thought flash memory was a dead commodity. They were wrong. 11 months later, Sandisk is a ~$50B juggernaut. The “easy” money in GPUs has been made. The real money is now in the second-order effects—the bottlenecks that hyperscalers can’t engineer around, memory storage is just one of them.”
Is Applied Materials (AMAT) “an unavoidable “complexity tax” on advanced chip production, or a cyclical capital equipment vendor nearing its peak?” You decide 😉
https://open.substack.com/pub/aryadeniz/p/deep-dive-applied-materials-amat
xAI going from $50 bn 2024 valuation to $200 bn on a $20 bn raise.
https://open.substack.com/pub/aidisruption/p/xai-raises-20b-more
With OpenAI looking to increase its next up round from a $500 bn to an $830 bn valuation this year and a float at $1.5 tn in 2027, SpaceX looking to IPO this year or next for $1.5 tn, and Anthropic eyeing a $350 bn valuation round, we’re headed into uncharted waters for private-into-public. Saudi Aramco can’t be considered a real comparator here IMO.
Grok over ChatGPT? xAI over OpenAI?
https://open.substack.com/pub/ruben/p/grok-chatgpt
AI at the science frontline and frontier:
https://open.substack.com/pub/sciencewtg/p/americas-genesis-mission-artificial
Not just shortages of general purpose GPUs and AI ASICs, but a crisis brewing in CPU supply, with TSMC only able to meet 80% of high-end demand, leading to likely 50% price increases:
https://open.substack.com/pub/fundaai/p/deepintc-agentic-ai-and-supply-bottlenecks
Very true. The advances may come in the boring industries, and mundane-if-impressive tech has longer adoption curves than current expectations credit:
https://substack.com/@chocolatemilkcultleader/note/c-196658825?r=2kxl2k
Lowdown on the Vera Rubin GPU arriving in the fall this year (45 degrees centigrade warm-water cooling, 130 kW per rack):
https://open.substack.com/pub/datacenterrichness/p/vera-rubin-enters-production-what
AI tools worth paying for?:
https://open.substack.com/pub/artificialintelligencemadesimple/p/the-ai-tools-im-actually-paying-for
In the author’s opinion, ChatGPT and Claude trump Gemini at the top tier ($/£200 pcm).
This analysis resonates:
State of World in the era of ML and MAGA: “[De]Globalism, resource nationalism, remonetisation of silver, Gold in China, the horseshoe is real, and the need for a New New Deal as the machines drive inequality.”
AI Bulls and Bears: “The bulls are right about 2035. We’ll be 8-50x short on compute. Tokens will be the kWh of knowledge work. Current capex will look prescient. But the bears may be right about today, where our ballpark is we are investing ~12x what the companies are making.”
Capex utilisation: “Phase 1 (2025-2027): Oversupply. Build faster than demand. Utilisation collapses. Economic losses mount. This is now.
Phase 2 (2028-2030+): Thresholds cross. Demand explodes. Hit compute ceiling. This is what the bulls are modeling.
Both are true. They happen in sequence, not simultaneously.”
From:
https://www.campbellramble.ai/p/26-views-for-2026
Why isn’t AI taking all the white collar jobs (already)?:
https://open.substack.com/pub/randomwalkwithdata/p/if-ai-is-taking-jobs-then-where-are
Answer perhaps: aging in place. Maybe we need more RE in the FIRE?
Everything on the new release of Claude Code 2.1.0:
https://open.substack.com/pub/thezvi/p/ai-150-while-claude-codes
When I read these model reviews I can’t help think, “where’s the moat?” Durable competitive advantage is the name of the game. No moat. No money.
SK Hynix, Micron and Samsung highlighted on this one as (mixing metaphors again) the ‘gatekeepers’ to solving the data centre storage ‘bottleneck’:
https://open.substack.com/pub/uncoveralpha/p/2026-ai-landscape-who-benefits-the
SK Hynix tipped by Woodford last year since when it’s gone from ~200,000 South Korean Won a share to ~750,000, on a 12x TTM P/E today, compared to just 6x a year ago. Say what you like about Neil (and no doubt WPCT/ SUPP/ INOV has to be one of the biggest % disasters in Investment Trust history) he still called this one right.
Alphabet overturns both the Cinderella prima donna OpenAI and the once-was (and now has-been?) innovator Apple:
https://open.substack.com/pub/aidisruption/p/ais-ultimate-cinderella-dethroning
Wouldn’t it be great, right after OpenAI floats, to have an ETF that was 50/50 long Alphabet and short OpenAI and long an equal mix basket of Volkswagen and Uber and short Tesla?
Still, Tesla has been the latter-day equivalent of the short-Japanese-government-bonds ‘widowmaker’ trade, which didn’t work for well over 30 years (until it suddenly did in the last year or two).
The Apple/TSMC partnership (a deep dive):
https://open.substack.com/pub/semianalysis/p/apple-tsmc-the-partnership-that-built
The TSMC fundamentals over the last several years are amazing.
Very bearish on TSLA/FSD/Optimus:
https://open.substack.com/pub/neuralfoundry/p/teslas-robotaxioptimus-dreams-are
But since when has TSLA not traded as a narrative asset? Narrative momentum has been the TSLA playbook from day one. Take the point on Waymo LiDAR reliability being on a different level to anything Tesla fields. Of course, in the deep dive on Tesla in Tomas Pueyo’s Uncharted Territories, he takes the opposite view.
It’s all about context window memory: storage stocks melt up:
https://open.substack.com/pub/amritaroy/p/memory-and-storage-stocks-are-melting
The social, economic and environmental cost of data centres:
https://youtu.be/NbOmVwT22i8?si=wJhoUzZE8pj90Bmo
Astonishing claims on I/O speed up and power requirements from fractal, distributed computing.
Is this a steaming load of BS or something worth investigating further???:
https://open.substack.com/pub/fractalcomputing/p/if-data-is-the-new-oil-what-if-prices
My suspicion antennae are screaming red alert, but it beggars belief that there’s not more info on this given the magnitude of the claims made. Why are OpenAI et al not all over this? They’re invested in survival and winning, not in propping up a failed data centre paradigm of bigger-and-more-expensive-is-better. I’m struggling to see how *everyone* but the fractal computing people would be incentivised to downplay or ignore this. Maybe my framework (sociological motivation for sidelining) for looking at this question is wrong.
DeepSeek drops a paper on R2 v R1 (in an expanded R1 paper, from 22 to 86 pages of detail):
https://open.substack.com/pub/aidisruption/p/deepseek-drops-full-r1-tech-report
32x cheaper per token in about a year.
AI = energy:
https://substack.com/@denisgorbunov/note/c-197401471?r=2kxl2k
Claude Code 2.1.0’s ‘new’ agentic features:
https://open.substack.com/pub/aidisruption/p/claude-code-210-official-release
But Claude 4.5 and Claude Code are apparently not too shabby either:
https://open.substack.com/pub/thezvi/p/claude-codes
One blogger on joining OpenAI:
https://open.substack.com/pub/generatives/p/on-joining-openai
Masayoshi Son, SoftBank, Switch and financing the AI boom:
https://open.substack.com/pub/netinterest/p/financing-the-ai-boom-2
“all the big software companies are [capital intensive] hardware companies now”
“in 2017, AI wasn’t LLMs. AI was artificial general intelligence (AGI). I think people didn’t think of LLMs as being AI back then. I mean, I grew up on science fiction books, and they predict a lot, but none of them pictured “AI” as something like a search-intensive chatbot”
“The secret to Google search was always how cheap it was, so that informational searches that were not monetizable (and make up 80% or more) did not pile up as losses for the company. I think this is the fundamental problem with generative AI and LLMs today—they are so expensive. It is hard to understand what the profit model is”
Michael Burry with Dwarkesh Patel on how the AI revolution has, and has not, lived up to expectations:
https://open.substack.com/pub/post/p/the-ai-revolution-is-here-will-the
“How long can Nvidia’s “insane demand” for GPUs last? And what happens to the company when the peak cash cow of their only viable revenue source is over? Nobody has given a good explanation. The semiconductor industry is cyclical by nature, Nvidia’s market cap rise has lifted the entire Semiconductor sector with it, but it’s boosted mainly on myths of scaling”:
https://open.substack.com/pub/futuresin/p/nvidias-2026-updates
Azeem on work after work, and living in a world of AI-enabled automation:
https://open.substack.com/pub/exponentialview/p/artisan-premium-making-in-2026
Ohio gets it right? 6 bn cubic ft per day of new natural gas. Cheap electricity. Attract data centers. New construction jobs:
https://open.substack.com/pub/doomberg/p/intelligent-design
More critiques of the Philip Trammell and Dwarkesh Patel Capital in the 22nd Century ‘thesis’:
https://open.substack.com/pub/pricetheory/p/ai-labor-share
“AI 2027” now AGI 2034?? Some detailed and wild extrapolation here:
https://open.substack.com/pub/aifutures1/p/ai-futures-model-dec-2025-update
Playing around with their assumptions is yet wilder still:
https://www.aifuturesmodel.com/
The map is not the territory.
It (the AI Futures Model) is just so nuts here that it’s actually worth quoting Seth Lloyd’s widely cited 1999 MIT paper on the ultimate physical limits to computation:
https://arxiv.org/abs/quant-ph/9908043
“A kilogram of ordinary matter holds on the order of 10exp25 nuclei. If a substantial fraction of these nuclei can be made to register a bit, then one can get quite close to the ultimate physical limit of memory without having to resort to thermo nuclear explosions. If, in addition, one uses the natural electromagnetic interactions between nuclei and electrons in the matter to perform logical operations, one is limited to a rate of approximately 10exp15 operations per bit per second, yielding an overall information processing rate of ≈ 10exp40 operations per second in ordinary matter. Although less than the ≈ 10exp51 operations per second in the ultimate laptop, the maximum information processing rate in ‘ordinary matter’ is still quite respectable.”
“The ‘ultimate laptop’ is a computer with a mass of one kilogram and a volume of one liter, operating at the fundamental limits of speed and memory capacity fixed by physics. The ultimate laptop performs 2mc²/πħ = 5.4258 × 10exp50 logical operations per second on ≈ 10exp31 bits. Although its computational machinery is in fact in a highly specified physical state with zero entropy, while it performs a computation that uses all its resources of energy and memory space it appears to an outside observer to be in a thermal state at ≈ 10exp9 degrees Kelvin. The ultimate laptop looks like a small piece of the Big Bang.”
Somehow, even with AI assisted accelerating progress (which, with respect to AI Futures, is one heck of a big assumption to make here) I can’t see us getting anywhere even close to just the room temperature version of the ‘ultimate’ computer, at ~10exp40 FLOPS/CPS per kg / litre. If so, this then, in turn, surely rules out the AI Futures main projection on grounds of practical if not physical credibility (AFAICT).
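To put that into rough orders of magnitude: the ~10exp40 ops/s per kg figure is from the Lloyd quote above, while the ~10exp15 FLOP/s figure for a present-day accelerator chip is my own ballpark assumption for illustration, not from the paper or the report:

```python
# Orders-of-magnitude context for the paragraph above. The ~1e40 ops/s/kg
# 'ordinary matter' limit is from the Lloyd quote; the ~1e15 FLOP/s figure
# for a present-day accelerator chip is my own ballpark assumption.
import math

ordinary_matter_limit = 1e40   # ops per second per kg (Lloyd, room-temperature case)
current_accelerator = 1e15     # FLOP/s, rough order for a modern GPU/TPU

gap = ordinary_matter_limit / current_accelerator
print(f"Gap to the 'ordinary matter' limit: ~10^{math.log10(gap):.0f}x")  # ~10^25x

# For scale: the ~10exp26 FLOP frontier training run cited earlier would take
# one kilogram of Lloyd's 'ordinary matter' computer roughly this long:
print(f"Frontier run on 1 kg at the limit: {1e26 / 1e40:.0e} seconds")    # 1e-14 s
```

On those assumptions we are something like 25 orders of magnitude per kilogram below the ‘ordinary matter’ limit, which is rather the point: projecting anywhere near it is extrapolation on a heroic scale.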
Worth emphasising that (per the footnotes to the linked AI Futures Substack post at #475) back in 2023, not long after GPT-3 dropped, the median AI ‘expert’ prediction for AGI (assuming it happens) was either 2047 or 2116 (2047 for “unaided machines outperforming humans in every possible task”, and 2116 for “all human occupations becoming fully automatable”).
I give Francis Galton’s wisdom of crowds (here a market in expert projections) moderately higher credence as a prior than AI Futures appears to.
Some ordinary, and all too plausible, bad AI futures (no apocalypse, no economic collapse, it just sucks in a regular, if still profoundly depressing, way; think Idiocracy):
https://open.substack.com/pub/bloodinthemachine/p/four-bad-ai-futures-arrived-this
Nvidia v AMD = Apple iOS v Android?
https://open.substack.com/pub/techfund/p/nvidia-vs-amd-apple-vs-android
Note parameter count growing at a trend of 10x p.a., test time scaling reasoning token use increasing by 5x p.a., and Rubin will give 5x the performance of Blackwell, despite the number of transistors only increasing by 60%.
There’s nothing new under the sun, whether (as Jesse Livermore, writing as Edwin Lefèvre, observed in his autobiographical ‘Reminiscences of a Stock Operator’) in the arena of investing or, for that matter, in computer science and AI.
Just as the Panic of 1907 and the Great Crash of 1929 share many features, will we look back upon the later 2020s as being a 1974 like (first) outbreak of AI winter moment?:
https://youtu.be/hYnadoy8aQE?si=IWMKQk8zRPH4YGad
The first explicitly articulated AI bubble ran from the coining of the term as a grant research marketing tool at the Dartmouth conference of 1956; but it took until long after the General Problem Solver of 1957 and the Perceptron of 1958 failed (both already obviously so by the time ELIZA debuted in 1964, in a misfired attempt to show the shallowness of the field) for the UK to finally pull the funding plug with the Lighthill report in 1973, and then for DARPA to do likewise in the US the following year.
How long might it take this time around? If the past is a prelude (which it rarely is), and taking the first LLMs as 2017 and comparable, in relative importance, to say the Perceptron of 1958; then we’d be looking at 2032/33 for things to fall apart.
But the cycle of hype and the vortex of VC and Private Lending funding is much greater now than then. That suggests to me an acceleration of developments and cycles now relative to the field of ‘AI’ in the 1960s.
A reminder from a year ago of why AGI isn’t happening anytime soon:
https://youtu.be/By00CdIBqmo?si=otAmZIwPlfYzzqsj
But, as the previously posted YT piece ends by noting, by 1980 an AI spring had begun (after the onset of AI winter in 1973/74), just as the crash of dot.com in 2000-2002 led, from the early 2010s, to a spring and ultimately long summer for what became FAANG, the Magnificent Seven and the Hyperscalers.
This is the US-orientated bull case for LLMs *already* creating a productivity boom, extracted from the start of one of the many (many) investing-related emails I now get each day (I can’t share a link as there’s no associated Substack etc):
“Productivity is eating inflation for lunch
“Experts” told us what would happen. All the tariffs, deportations, pressure for lower interest rates, pointed to a guaranteed crash. They were wrong…Trade Deficit: Dropped to $29.4B in October, down from $136B in March. That is the lowest level since 2009. GDP Growth: The Atlanta Fed projects 5.1% for Q4. The historical average since 1947 is only 3.2%. We are growing nearly twice as fast as “normal.” Inflation: It isn’t spiking. It is dropping. Down to 1.9% (Truflation). How does that happen? Productivity. It’s up nearly 5%. Companies are figuring it out. They’re using AI. They’re navigating deregulation. They are producing more profit with fewer people.”
I have to say, and against my sceptical (cynical?), miserly, curmudgeonly instincts, this is (I reluctantly concede) actually quite plausible. I’m using enterprise Copilot at work. Copilot is not exactly highly regarded by premium-paid-tier LLM ‘super users’. But I’m impressed nonetheless. Surprisingly impressed.
On appropriate tasks (some of which are very highly complex and novel, multi-layered questions), and with my guiding, shaping, evaluating and amending (where needed) of outputs, I reckon (even with all that human domain ‘expert’ input) that, overall, I’m still getting at least a conservatively estimated composite 2x to 3x speed-up (i.e. 3 days’ work compressed into 1; although, sadly, this just leaves more time for bureaucracy to fill the vacuum productivity creates).
And, given the plausibility from direct product experience (not the free chatbots, but the professional enterprise suites), can we really say that the firms behind the hiring freezes are just using AI as an excuse (and that it’s really a recession just around the corner being anticipated by forward-looking, almost prescient, HR teams in those companies)?
Isn’t it credible (and parsimonious) to suggest that top paid-tier frontier models – which some months ago were already scoring 126 to 148 points (most commonly 130, the threshold from ‘very superior’ into ‘genius’) on various recognised, respected and well-established IQ tests, and going up by 2.5 points a month – might (increasingly) be displacing the need to take on inexperienced junior staff?
Doesn’t seem at all far fetched to me, even as the econ bloggers on Substack grasp at every explanation under the sun:
https://open.substack.com/pub/apricitas/p/the-no-hire-economy
On “What Happens When Superhuman AIs Compete for Control?” 🙁
https://open.substack.com/pub/aifutures1/p/what-happens-when-superhuman-ais
It’s either 6,000 words of ‘sci fi’ ‘faction’ presented both as a warning scenario and as an aid to prudent risk management,… or we’ve had it….
I can’t see how 2027 is still on the table as a serious timeline for AGI (or even ‘just’ for fully automated coders), given the evident slowdown in the rates of improvement from GPT-4 to 4.5, GPT-4.5 to 5, GPT-5 to 5.1, GPT-5.1 to 5.2, and from Gemini 2.5 to 3, compared to the rate and overall magnitude of the incremental improvements in utility in going from GPT-3 to GPT-4.
This guy’s an option trader but even he’s seeing a massive impact now from ML/ LRMs/ LLMs in his work:
https://open.substack.com/pub/moontower/p/work-is-going-to-feel-very-different
You can’t just dismiss this phenomenon out of hand.
It can be a bubble (of one or more sorts) and still be transformative.
Many things can be true at once.
Building moats at the AI application layer:
https://open.substack.com/pub/artificialintelligencemadesimple/p/how-openai-builds-amazing-products
New 6,000 word breakdown of the whole AI stack linked to here:
https://substack.com/@scstrategist/note/c-198399762?r=2kxl2k
Last quarter hyperscaler capex accelerated to $142 bn, over three months…! Think of that: 91 days, $142 bn, for one type of fixed investment by one small group of massive companies. How long before data centres are bigger than defense in the US (although, that said, DJT now claims to want to boost ‘Department of War’ expenditure from $950 bn this year to $1.5 tn the next – not to be taken seriously, I hope)?
To quote the pitch for attention in the preface note: “every bottleneck. Power generation. Thermal management. Electrical infrastructure. Connectivity. Physical safety”
Despite incrementally increasing utility, core failure modes (most notably non-specificity of output to instructions, of which hallucinations were the most obvious early examples) still seem to be ‘baked in’ to the current diffusion and transformer architectures, even after nearly a decade:
https://youtu.be/bv19nXfb0bc?si=OjDUYSY5IdLaoZpC
Not promising for AGI.
View from Richard Murphy on the Left:
https://youtu.be/68iTH6mX-0s?si=NI0T6F50ZXMty6ze
Agree that financing arrangements lead to shadow bank risk and consequential contagion risk.
Also agree with him on job destruction.
The inflation effect, however, is exceedingly unclear.
Disinflationary. Reinflationary. Who can tell? I can’t. I don’t think Murphy can either, though he doesn’t realise it.
On the face of it, where ML / LRMs / automation substitute (with various degrees of imperfection) for (cognitive) labour, it would seem disinflationary / deflationary.
On the other hand, it is true that the cost of chips and energy will probably go up, and that’s prima facie inflationary.
Not sure that there’s any evidence in the UK though that data centres are going to affect water demand or pricing.
Might well be different for a data centre in Arizona of course.
Any higher productivity from ML etc could, depending on the context and scenario, be either disinflationary or reinflationary (or cancel out neutral).
Again, no one can know.
Agree with Murphy that the BoE will probably call it all wrong and that, as always, politicians are clueless / asleep at the wheel on this.
As you would expect, a precautionary perspective in this one.
Finally, saying the quiet part out loud:
https://open.substack.com/pub/aisupremacy/p/generative-ai-might-be-hurting-the-labor-market-future-of-jobs
Yes, it is, I think, likely that LLM/LRMs, and wider neural net ML applications, are now starting to cause measurable, and indeed significant, permanent job losses.
Kinda obvious that it probably would, sooner or later; and so it seems likely that it is now actually showing up.
To paraphrase (and mangle) Sherlock Holmes: when you’ve eliminated everything else, what remains is the truth.
If this is indeed ‘AI’ related job losses (especially given the strong GDP prints State side), then the issue becomes not if but how far, and for how long, does this go on for???
Are we about to enter a world transformed (and not necessarily for the better) job-wise???
Surface success versus deep realities, and safe versus innovative, in China’s AI execution phase:
https://open.substack.com/pub/hellochinatech/p/china-ai-fast-follower-trap
Although the Chinese seem to be doing pretty damn well on the innovation front to me.
They don’t exactly feel like Kodak in 1975, developing, but not then going on and commercially pursuing, a digital camera:
https://open.substack.com/pub/robopub/p/world-no-1-chinese-firm-open-sources
And per my #490-491 comments above, the US job situation is probably worse than feared:
https://open.substack.com/pub/shanakaanslemperera/p/the-phantom-jobs-thesis-americas
The AI blizzard is coming in thick and fast to my investment-related inbox. I’ll quick-fire the next nine below in the interests of economy.
Here’s the first, starting with Mr Musk’s very bold claims:
https://open.substack.com/pub/aidisruption/p/musks-3-hour-bombshell-interview
Evolve AI agents by getting them to compete:
https://open.substack.com/pub/importai/p/import-ai-440-red-queen-ai-ai-regulating
“A one-shot (AI) warrior defeats 1.7% of human warriors. Best-of-N sampling produces a set of warriors that can defeat 22.1% of human warriors. Evolutionary optimisation against each human warrior generates a specialised warrior for every opponent; this set can collectively defeat 89.1% of human warriors and defeat or tie 96.3%.”
For powering data centres China’s a hundred years ahead of the US, apparently…
https://open.substack.com/pub/exponentialview/p/data-to-start-your-week-26-01-12
How one super user uses Claude:
https://open.substack.com/pub/aidisruption/p/my-2025-claude-code-mantra-simplify
Keep it simple.
AI capex is not slowing down…
https://open.substack.com/pub/crackthemarket/p/the-crack-the-market-signal-1
We’re on a motorcycle and either about to career off a cliff, or head up a jump ramp to leap over half a dozen monster trucks!
As you might expect, Gary thinks it’s the cliff and not the jump ramp ahead:
https://open.substack.com/pub/garymarcus/p/lets-be-honest-generative-ai-isnt
I’m personally really sceptical of these surveys which show ‘no’ productivity boost. Have they actually *used* this tech? What are they measuring? And how? It doesn’t ring true with real life.
Software AI was just the warm up act:
https://open.substack.com/pub/theaiopportunity/p/robotics-will-be-the-next-decades
Chiplets, smart rings, HBM4, silicon photonics, IEDM papers:
https://open.substack.com/pub/marklapedus/p/the-latest-news-in-ic-packaging-and-ffb
The Rigetti twelve-by-nine qubit quantum computer is not necessarily a massive breakthrough. If all nine qubits are fully error-corrected logical qubits, then that’s a maximum of a 512-fold (i.e. 2exp9) speed-up from quantum parallelism per operation per qubit over a classical logic gate.
On chiplets, Cadence scores very highly on quality growth metrics.
And, lastly, this from the aforesaid commentary (from Money Machine Newsletter) on the effect of AI on jobs and productivity:
“There’s an old habit in the investing world. It goes like this…If companies aren’t hiring, the economy must be crashing. It’s time to break that habit. Everyone spent the last year panicking about a “cooling labor market.” They looked at the slowing hiring rates and flattened work hours and predicted the worst. They were wrong. While mainstream media is obsessing over headcount, real GDP accelerated to a 4.3% annualized pace. Productivity surged at nearly 5%. This is what efficiency looks like. We are seeing an economy that generates more output with fewer workers. That isn’t a recession signal. That is the holy grail of business. It’s a productivity-driven expansion. For corporate America, this is the perfect setup: Unit labor costs go down. Inflation pressure eases. Profit margins expand. Companies are realizing they can grow earnings without aggressive hiring or raising prices. They are doing more with what they have. That’s why earnings and revenues are hitting records even while payroll gains moderate.”
And on the current viability of the picks, shovels and power producers as investment themes:
“AI’s big checks just cleared, expect more zeros in the market. Microsoft spent ~$9B on IREN—a stock we called out early in the year, when it was on no one’s radar.
Amazon tossed in ~$5B for CIFR. Yesterday, these were bitcoin miners. Today, they’re AI power plants. AI’s no longer about compute—it’s about capacity.
Each new data center sucks down enough juice for 100,000 homes. Satya Nadella (Microsoft CEO) nailed it: “We have the compute. We just don’t have the power to plug it into.” You can buy GPUs. You can’t buy electricity. At least not fast enough. So the leverage quietly shifted—from silicon to supply. How long will this last, who knows. Could this change? Absolutely. But not at this very moment”.
And a couple of stragglers which I ‘missed off the list’, the first on Claude Code’s agentic qualities of being a “repeatable loop that can read context, plan, take actions, verify results, and keep going….a general execution interface for knowledge work”:
https://open.substack.com/pub/neuralfoundry/p/claude-code-is-taking-over-everything
And on Apple choosing Google Gemini to power Siri:
https://open.substack.com/pub/fundaai/p/researchgoog-google-gemini-may-become
Another win for Alphabet with TPUs, Gemini 3.0, AI search summaries, Waymo tie in, Google cloud, DeepMind, AlphaFold, and all the rest. A good year for Google.
@Delta Hedge — Evening! You write:
My experience, reading, and take so far remains… uncertain. The only people I know who are persistently reporting productivity boosts from AI without it damaging their output are programmers. I know plenty of others who are using AI, but who I feel are basically swapping one set of issues for another (e.g. the AI produces something, but then they spend loads of time fixing it, or they don’t think harder about a cleaner, neater solution that would have saved them more time), or whose output is suffering (here I’m thinking mostly of writers).
With that said I think AI is having an impact at the margin. Even as an extra good search tool it’s helpful. It must be increasing output for, say, people who have to produce a lot of rote copy for product descriptions. (I had a story in last week’s links about the death of the copywriter.)
Three years in I don’t know anyone who has lost their job to AI, seen anyone lose their job to AI, or faced that threat very viscerally in reality (versus the potential).
This is not to say there isn’t disruption. Blogs are certainly being disrupted away by AI! But I’m not sure substitution is the same as a productivity boost? (Well I’m sure it’s not but you take my point).
TLDR something is happening but it isn’t (yet) as big as the hullabaloo, IMHO.
Evening @TI.
Direct knowledge is a dangerous thing given (as Morgan Housel reminds us) that we each sample just 0.000000001% (10^-11 as a fraction) of the lived experiences of the 117 bn or so people who’ve ever lived. But, that said: even though it’s clearly not at all what any of us ever thought AGI would be, and, on any reasonable and fair-minded view, it isn’t (at least yet) anywhere even remotely near what a truly generalisable AI should be capable of, the top paid-tier frontier LLMs still deserve more credit for what they demonstrably *can* do than they currently get from most people most of the time.
I’m using mid-tier (£24 pcm) Copilot to do my self-assessment this month. It seems to be getting it right (I have to make sure all the docs are OCR’d first, of course).
Just on educational YouTube today: ChatGPT 5.1 scores 88% on a second-year undergraduate quantum physics paper, in just 30 seconds for a 3-hour exam:
https://youtu.be/JcQPAZP7-sE?si=vTd02DjjoTpLWeIh
How is it possible that this tech’s not having a positive effect on productivity? It beggars belief.
I’m not saying (unlike Elon) that this is the singularity. It certainly ain’t that.
But it is something, and probably something really quite important.
I am sure many companies will be able to downsize payroll and boost the bottom line very soon. Whether they’ve the gumption and the brass neck to do so might turn out to be a different matter.
Of course, the socioeconomic and public finance effects (less payroll tax) *if* we see mass ‘head shed’ could be devastating if it’s handled badly.
But that doesn’t mean that the rate of profit in GDP can’t go up a lot even as a result of only mundane utility models. The aggregate profit margin is probably no longer a mean reverting series.
Of course, none of this means LRMs/LLMs can find out anything truly new (though they might, see AlphaFold and DeepMind). And without genuine innovation it’s an open question whether you can get long term economic growth out of ML. But you probably can reduce the labour share of output, make the Gini coefficient worse, make a few people very rich indeed, and raise the IRR for shareholders.
And of course the first to ‘lose’ their jobs will be the young would-be starters who never get taken on to begin with, because the cheapest lay-off is the one you don’t have to make because you froze recruitment. I think this is demonstrably what we’re starting to see in the graduate employment stats.
My cohort (the fifty plus brigade) will be in the firing line eventually I guess, but at least I’m expensive to make redundant (21 months’ pay).
@Delta Hedge — They can definitely do *something*. The question is how useful much of what they do is to someone (a) who couldn’t already do it themselves, (b) who doesn’t have to double-check the output, or (c) who couldn’t get it done cheaply elsewhere.
I appreciate your self-assessment tax return was (presumably) just an off-the-cuff example, but you’ll have to check this for errors, you can presumably do it yourself, and my accountant can do mine so easily he chucks it in for free with my limited company accounting. I imagine it takes him about 2-3 minutes, and most of that will be keying in EIS scheme numbers and pressing ‘go’. 😉
Again, I am with you in part. I think it’s clear they will boost productivity to some extent. My total late-night gut-sense guess is perhaps they’ll make everything 5-10% more efficient, averaged across all industries, all jobs, all tasks. That’s (a) very meaningful, (b) commercially valuable, and (c) probably not what’s priced in.
This is not to say we won’t see some kind of breakthrough, or more specialised/niche-trained models/instances/apps that start to cut into this or that area.
But (cautiously) I think the revolution is off the table for now, most likely.
Again, don’t get me wrong. I said to my most AI literate friend last night I think there’s still a 5-10% chance (again a total guess) that we could still be at the start of something existentially threatening and humanity-changing. Not because I can see it, but because I can still gasp in awe at a chatbot’s output and wave my hands over my imagination. And I couldn’t do this five years ago when this was still basically just ‘big data’. So it has to be given some kind of trajectory to endgame probability. Something *has* changed.
But for context he thinks on this tech there’s basically no chance. And he knows far more about it than I do.
Of course every word I’ve written above could look incredibly dumb in ten years. Again, I don’t dispute something is potentially in play. (And a 5-10% boost to global GDP is meaningful anyway, depending on the time frames.)
“Interesting times” as they say.
p.s. Apols for the typos if you read the first version of this comment over email!
Interesting points.
On your (a), if the task is outside the circle of competence / comfort zone of the person concerned, then they’d have to outsource to a professional at much greater cost than a paid-up LRM (let alone a free one).
If the task is something the person could do themselves, then the LRM can still do it much quicker, even allowing for checking.
Any 130+ IQ undergrad physics savant could get 88% on a 2nd-year quantum mechanics paper (remembering that the paper used was one which had never been made public, and so couldn’t have been within the model’s pre-training set).
But no human could answer the paper at such a solid 1st-class (>70%) degree level in a matter of seconds (versus the 3 hours allotted for the exam).
There must (surely?) be a huge (actual / potential?) speed-up for tasks here.
*If* workflows can really be joined up / managed effectively by agentic automation imminently, then I find it hard to see how there isn’t going to be some possibly quite big productivity boost.
That doesn’t necessarily mean any more revenue/ sales for firms, but it would be disruptive, and it might be important.
On your (b), these models are, in practice, becoming a heck of a lot more reliable, accurate and useful over time (so far).
It’s true that they still screw up some tasks, and for really compute-intensive things like image creation they can still mangle the instructions.
But I can’t help feeling like we’ve passed the iPhone moment.
I don’t think we’re any longer in the Palm Pilot / PDA / Apple Newton phase (the GPT-3 era) of a ‘nice idea’ that’s clunky, hard to use and not that useful.
Whether things can continue to improve given scaling walls, energy constraints (especially getting new generation onto the network), financing woes (Oracle CDS spreads) etc is an open question; although, as a general steer to action, I tend not to bet too heavily against progress.
On your (c), the outsourcing question is fascinating.
It might make outsourcing even cheaper and more useful / reliable because the offshore centre is fully utilising LRMs etc.
On the other hand, fully agentic models (if they do arrive soon, as promised) make automated reshoring / ‘inhousing’ much more economically viable too.
I suppose a virtual task doesn’t really have a location.
Although the comparison is inexact (to say the least), I’m slightly reminded of 1989, in the aftermath of Tiananmen, when my history teacher provoked derision for saying (IIRC, it’s stuck with me) that within 40 years the world would be looking to copy China’s economic model (state capitalism / market socialism). It seemed ridiculous then, with China’s tiny peasant-based economy, and the communist world then teetering on the brink of collapse. Less than 37 years on and China leads the world on robotics, manufacturing, exports, high-speed rail, the fastest urbanisation in history, the greatest and swiftest lift of people out of poverty in history, the largest PPP GDP in the world, and so on.
Maybe in the early 2060s we will look back on the shaky start to the ML era in 2022/23 in a similar way.
@Delta Hedge — Cheers for further thoughts. Again, I am not saying they are not doing *anything*. And a 4% bump to productivity would be very meaningful.
But there is absolutely no way that using a chatbot is like hiring a PhD-level physicist or anything close, except perhaps if the role is to answer questions directed at PhD-level physicists.
I agree the errors are down. But they are still there and they can be howlers. Moreover the bots still lack agency, and IMHO it’s going to be hard to translate LLMs into autonomous units that can navigate even a digital space *more cheaply and effectively* than a human.
But will this technology infiltrate all aspects of working/other life? It looks that way. Hence I’m coming down in the ‘human+’ camp at the moment.
In contrast, three years ago I thought huge job losses looked much more possible. But right now they don’t pass the smell test. If people could buy AI employees for £50/£100 a month with the potency of a PhD-level graduate, we’d be seeing vast demand and collapsing employment. We’re not currently seeing either. Again, we’re seeing *something*, but not that.
Time will tell, and another six months will be another ten years in this field!
On the other hand, something to support your view that I just read:
https://www.businessinsider.com/mckinsey-workforce-ai-agents-consulting-industry-bob-sternfels-2026-1
Just as a follow-up, look at this thread on X on using ChatGPT to book flights, which allegedly ‘breaks’ SkyScanner etc:
https://x.com/riyazmd774/status/2010648637622370752
Yes, he apparently gets results. But clearly probing the Chatbot is at least as much effort as going to a booking website and arguably much more.
Now you could argue this will all be wrapped up in a thin-app that replicates the booking app functionality without the need for prompts.
But (a) surely that would benefit the incumbents because they already have brand, distribution etc and (b) likely it would be more expensive than bespoke software.
There’s currently a lot of SaaS under a cloud because of these sorts of fears. But I do wonder… 🙂
Again, there will *definitely* be disruption. (I don’t want my gentle pushback to go down in lore as ‘it’s all nonsense and a fugazi’, because I do believe it’s a disruptive and transformative technology. The question is to what and to what degree…)
All agreed.
It’s a heck of a lot of work / iterating and whatnot to use these tools fully (and perhaps most people aren’t), but, by God, for the right task, when they do work well, they’re (or at least can be) downright impressive.
And I don’t impress easily on tech. I stayed well clear of dotcom companies and thought social media in the aughts was a profitless scam. More fool me! 😉
I think the biggest issue is that it’s not obvious (at least to me) that it’s a case of the ‘West and then the Rest’ on AI.
China may be a lot closer to realising the benefits of this and interrelated automation tech than the US realises or understands.
I’ll post some links in a second from the last couple of days on this. I mention it here in case they get auto-filtered into moderation (as anything with more than one hyperlink per comment / post does).
At the risk of sounding like the ‘China Cassandra’ here (“they’re pulling ahead, we’re doomed I tell you!” 😉), and further to my last point in my previous post, I think there’s good reason to be concerned that the US’s ostensible head start in raw compute is not, in practice, the durable advantage which it’s typically made out to be.
China’s or America’s lead (from YT yesterday, with the essay it references linked immediately below it)?:
https://youtu.be/KKtbq-w4mzg?si=OVrecdf9fOCJniJe
https://kaskaziconsulting.squarespace.com/publications/my-essay-entitled-no-more-moore-so-what-then-for-microchips-nbspand-for-china
The Stargate mythos (also from YT yesterday):
https://youtu.be/K86KWa71aOc?si=r7-GDymRddLshfBC
And no one can verify the US data centre deals anyway so why is the Chinese AI effort said to be opaque?:
https://open.substack.com/pub/davefriedman/p/the-ai-data-center-deals-that-no
And the US tech titans’ approach is as much theological (a digital God in a desert data centre) as the Chinese one is pragmatic and technical:
https://open.substack.com/pub/shanakaanslemperera/p/the-gods-are-being-built-in-the-desert
Meanwhile, and linked to this concern, from Substack today:
DeepSeek is boosting reasoning with ‘sparse compute’:
https://open.substack.com/pub/aidisruption/p/liang-wenfeng-open-sources-memorydeepseek
And how Baidu is optimising the Chinese approach to AI:
https://open.substack.com/pub/hellochinatech/p/baidu-spinoff-valuation-trap
I wonder if the question isn’t so much, for now, outright job losses as changing job structure and roles for those in work.
This one today further presses the point:
https://open.substack.com/pub/shanakaanslemperera/p/the-apprenticeship-severance-ai-eliminated
A duet on Claude Code and Claude Cowork:
https://open.substack.com/pub/thezvi/p/claude-coworks
https://open.substack.com/pub/aidisruption/p/anthropic-launches-no-code-edition
For every winner there’s a loser, and with Apple choosing Google’s Gemini over both xAI’s Grok and OpenAI’s ChatGPT, Elon Musk and Sam Altman are the losers for sure:
https://open.substack.com/pub/garymarcus/p/the-rapid-rise-and-slow-decline-of
Moore’s Law as Moore’s Wall for memory, and then trying to climb that wall through logical, vertical, lateral and architectural scaling:
https://open.substack.com/pub/semianalysis/p/interconnects-beyond-copper-1000
Is copper the true ‘biggie’ for AI related materials’ bottlenecks?:
https://open.substack.com/pub/paretoinvestor/p/copper-crisis-2026-supply-shortage
K-shaped indeed – and is it really only coincidental that the bifurcation happens at the release of GPT-3?
https://substack.com/@therealrandomwalk/note/c-199197598?r=2kxl2k
A current state of the frontier view of using models:
https://open.substack.com/pub/robotic/p/use-multiple-models
Loyalty to all is a moat for none.
Generate the idea in GPT-5.1. Feed it to Claude to tear it down. Present both sides to Gemini 3 to evaluate. Ask Grok whether anyone’s come up with anything similar on X. Explore the research literature on Perplexity.
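For anyone wanting to wire that loop together rather than copy-pasting between tabs, here’s a minimal sketch of the idea (mine, not from the linked piece). It assumes the official OpenAI and Anthropic Python SDKs with API keys already set in the environment; the model names are placeholders echoing the ones above, and the Gemini, Grok and Perplexity legs would follow the same request/response pattern via their own SDKs or OpenAI-compatible endpoints.

```python
# Sketch of the draft -> teardown -> verdict loop described above.
# Assumptions: OPENAI_API_KEY and ANTHROPIC_API_KEY are set; model names are
# placeholders and should be swapped for whatever each provider currently lists.
from openai import OpenAI
import anthropic

openai_client = OpenAI()
claude_client = anthropic.Anthropic()

def ask_openai(model: str, prompt: str) -> str:
    """Single-turn chat completion via the OpenAI SDK."""
    resp = openai_client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def ask_claude(model: str, prompt: str) -> str:
    """Single-turn message via the Anthropic SDK."""
    msg = claude_client.messages.create(
        model=model,
        max_tokens=2048,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text

topic = "Is the AI capex boom already priced into the hyperscalers?"

draft = ask_openai("gpt-5.1", f"Write a one-page investment thesis on: {topic}")
teardown = ask_claude("claude-sonnet-4-5",
                      f"Tear this thesis apart, point by point:\n\n{draft}")
verdict = ask_openai("gpt-5.1",
                     "Weigh the thesis against the critique and give a verdict.\n\n"
                     f"Thesis:\n{draft}\n\nCritique:\n{teardown}")
print(verdict)
```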
Very grim reading from Woodrow Hartzog and Jessica Silbey, introduced by Gary Marcus:
https://open.substack.com/pub/garymarcus/p/how-generative-ai-is-destroying-society
Strong criticisms. Here’s the academic paper (How AI Destroys Institutions) from Boston University School of Law on SSRN:
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5870623
Too plausible I fear.
This is a civic / natural justice / public law perspective.
But there’s a broader tendency of historical thought on the anarchist and revolutionary left that the one and only technology which was truly democracy enhancing and empowering (of the masses both individually and collectively) was the spread of the cheap small arm because it afforded the peasantry a means of force to resist feudal exactions.
All the others – the printing press, radio (propaganda), tanks, and so on – enhanced the power of the centre over the periphery and the mighty over the meek.
Will AI follow that template?
Intel back from the dead?
https://open.substack.com/pub/marklapedus/p/analyst-intels-foundry-unit-wins
AI running the economy hot?:
https://open.substack.com/pub/marklapedus/p/analyst-intels-foundry-unit-wins
“We know we will likely catch some flak from the economist crowd, who will find every possible way to explain this away, but we have a hot take here. It’s possible that the hundreds of billions of dollars we’ve been investing into the “productivity-enhancing-technology-machine” for the past three years is actually…enhancing productivity.”
Yep. 😉
No juice means Nvidia simply can’t deliver:
https://open.substack.com/pub/shanakaanslemperera/p/the-megawatt-mirage-nvidias-36-trillion
“PJM Interconnection, the regional transmission organization that manages the grid serving the critical Northern Virginia “Data Center Alley” region, publishes data that quantifies the impossibility. Datacenters drive ninety-four percent of forecasted load growth in the PJM territory through 2030. Yet interconnection queue times have stretched from under two years in 2008 to more than eight years today.”
The dark side of data centres and of LLMs: “Less reading time. Less human connection. Less stable white collar career ladders. Higher electricity bills. More noise pollution for local communities. More AI spam and AI slop on the internet. Less human and real social media (as if that were even possible). More authoritarian use of AI by Governments, big corporations, tycoons, dictators and elites to serve their own agendas”:
https://open.substack.com/pub/aisupremacy/p/was-2025-the-year-of-the-ai-datacenter
The source for the data in the above “AI Supremacy” piece on American dominance in total AI GPUs, TPUs and NPUs and H100 equivalents:
https://www.linkedin.com/posts/ninaschick_the-truth-is-this-no-other-nation-on-earth-activity-7406332282636124161-0lrW
“What could possibly go wrong” AI as a socioeconomic phenomenon:
https://open.substack.com/pub/adamtooze/p/the-ai-boom-as-a-socio-economic-phenomenon
The copper theme again, as a critical bottleneck material for the AI build-out.
Is Anthropic’s Cowork causing “SaaSmageddon”?:
https://open.substack.com/pub/offthecharts/p/saasmageddon
*If* it’s an overreaction, then maybe the likes of Constellation Software are worth another look??
A very user-friendly and down-to-earth guide to better prompts:
https://open.substack.com/pub/ruben/p/how-to-better-use-ai-before-prompting
After Nano Banana, Google Veo 3.1 for 4K-quality, prompt-to-short-video AI:
https://open.substack.com/pub/aidisruption/p/google-veo-31-ai-tiktok-now-4k-vertical
More on Apple’s Siri choosing Google’s Gemini:
https://open.substack.com/pub/aidisruption/p/google-snags-apple-ai-deal-chasing
Zvi on the AI jobs trilemma: productivity, full employment, or low prices:
https://open.substack.com/pub/thezvi/p/when-will-they-take-our-jobs
A very deep dive on all things ASML:
https://open.substack.com/pub/aryadeniz/p/deep-dive-asml-holding-asml
Go Netherlands!!
UBI or bust:
https://open.substack.com/pub/neuralfoundry/p/ubi-or-were-screwed-part-2
“When people finally got uninterrupted hours to experiment, especially over a holiday lull, the collective denial got harder to maintain. Once you use these tools on a domain you already understand deeply, you stop arguing about whether they are “real,” and you start arguing about what happens when they keep getting better”
Well, we will each find out soon enough if our jobs are ‘safe’.
Adapt work for AI or AI for work?:
https://open.substack.com/pub/amistrongeryet/p/the-new-model-of-software-development
“My timeline is suddenly awash in engineers (including me!) reporting that Claude Code is revolutionizing their work”
“AIs struggle (for now) with large projects, but they can drive the cost of small projects to near zero”
“software engineering is still mired in the steam era. Designed around the strengths and limitations of human engineers, current development practices will soon seem as ridiculous as a factory full of shafts and belts”
“Electric motors didn’t revolutionize manufacturing by doing the job of a steam engine. They did it by enabling a new approach to machine power that eliminated the need for shaft-and-belt systems and multi-story factories. AI is going to revolutionize industries, not by doing traditional jobs in traditional ways, but by enabling new, often more personalized, approaches”
Can and how do we measure the “I” in “AI”?:
https://open.substack.com/pub/aiguide/p/on-evaluating-cognitive-capabilities
@Delta Hedge – Yes, coding is the 100% clear winner from the AI revolution so far. Even my most sceptical informed technology friends are now sold on it being the future of coding. Of course the question is whether it’s the start or (for now) the destination, as we discussed the other day…
@TI #539: Language is just a form of code. Sparsely written software can be more informationally efficient, in a Shannon entropy sense, than natural language; but, if you have high levels of credence in materialism/ physicalism and in reductionism then, whether it’s John Milton and William Shakespeare, or ‘just’ an API, it’s ultimately just “0s” and “1s” rendered in different ways.
Excellent Gresham College lecture (bizarrely starting about 46 minutes in!?) on the history and economics of ‘AI’, from the 1955 proposal for what became the Dartmouth conference the next year (and which, arguably, created the field as a tangible endeavour) onwards:
https://www.youtube.com/live/0HvvPDZoxdA?si=xR-BE0urjpOI8N4w
Does AI have to ride on the coattails of human / evolved biological intelligence or are many pathways to many different types of intelligences possible? The true ‘Alien’ may be the AI (ASI?) one which we ourselves had a hand in creating.
What does Ash say in “Alien”….Ahh yes:
‘I admire its purity’.
I can hear Elon’s voice with those words! 😉
Electrify before everything else: the grid versus GPUs:
https://substack.com/@shanakaanslemperera/note/c-199566459?r=62vrvp
Indeed. OpenAI’s sole objective is to become TBTF:
https://albertoromgar.medium.com/you-have-no-idea-how-screwed-openai-is-9481fe33f1db
If you see this @TI you should look at this one:
https://open.substack.com/pub/paretoinvestor/p/grok-is-crushing-the-s-and-p-500
Short sample size so far, but Grok is crushing other models on stock selection and beating SPY by nearly 3x. HFs are probably too risk-averse to do this at scale, but how long before retail can use an API to connect to their IBKR account and have it execute trades automatically?
That in turn would presumably increase momentum / trend-following flows (and raise volatility?).
AI eats SaaS for breakfast:
https://open.substack.com/pub/davefriedman/p/the-saas-selloff-ai-and-interest
Interesting on the margins and cost structure. New billing model needed.
Is AI’s water footprint (and ecological impact) worse than the cattle industry?:
“At 245 gallons per burger, that’s 2.7 billion output tokens per burger (!). Even more, if we assume a daily request number of 30 queries per day and an average output length of 375 tokens, we get to the conclusion that a single burger’s water footprint equals using Grok for 668 years, 30 times a day, every single day.”
https://open.substack.com/pub/semianalysis/p/from-tokens-to-burgers-a-water-footprint?l
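A quick back-of-the-envelope check of those figures (my arithmetic, not the article’s):

$$
\frac{2.7\times10^{9}\ \text{tokens}}{30\ \text{queries/day}\times 375\ \text{tokens/query}} = 2.4\times10^{5}\ \text{days} \approx 657\ \text{years},
$$

which is in the same ballpark as the ~668 years quoted; the small gap presumably comes from rounding in the source’s inputs.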
Running enterprise workflows in Gemini:
https://open.substack.com/pub/aidisruption/p/building-enterprise-workflows-with
More on Claude Coworks:
https://open.substack.com/pub/thezvi/p/ai-151-while-claude-coworks
Jarvis your AI digital butler from Gemini 3:
https://open.substack.com/pub/aidisruption/p/gemini-awakens-googles-ecosystem
AI bubble watch update, margin debt looks worrying:
https://open.substack.com/pub/russellgclark/p/me-myself-ai
More circular funding, now xAI:
https://parsers.vc/news/260108-ai-titans-surge–xai-secures-billions/
@DH — Thanks for the ongoing links. Would it be possible to keep more of them included in one post please? The reason is every comment post creates a new row entry with all the gubbins in the back-end editor (name, IP, post, date etc) so it’s a lot more to scroll through when moderating, particularly so when on a phone!
Hope that’s not cramping your style too much, cheers!
No problemo. Will do. I was just going with one link per post to avoid the spam filter kicking anything with more than one link into the awaiting-moderation bucket, and thereby hopefully to save you the work of then having to moderate it. I hadn’t realised that it could create work in and of itself!
Thanks! (Yes, moderation is a schlep however it’s sliced. Not half as bad nowadays since I restricted comments to more recent years though…)
A phalanx of newly curated AI-related links from later today:
SIGNIFICANT MARKET SHIFTS
ChatGPT To Show Adverts After Losing Billions (The Telegraph)
https://www.telegraph.co.uk/business/2026/01/16/chatgpt-to-show-you-adverts-after-losing-billions/
This marks the end of the subsidised generative AI era.
OpenAI’s pivot to advertising confirms that subscription revenues alone cannot offset massive compute costs. For investors, this creates a direct collision with Google’s search monopoly and fundamentally alters the unit economics of the AI sector. Expect volatility in ad-tech stocks as budget allocation shifts toward conversational interfaces.
Google Starts to Eat the World (Neural Foundry)
https://open.substack.com/pub/neuralfoundry/p/google-starts-to-eat-the-world
A signal that the incumbent is successfully leveraging its distribution moat.
Google is moving from defensive maneuvering to aggressive ecosystem lock-in, integrating agentic AI to cannibalise standalone startups. The implication is that “wrapper” companies are doomed; value is consolidating back to the platform layer. This reinforces the bull case for Alphabet as it operationalises its proprietary data advantages to crush fragmentation.
The Death of Software 2.0
https://open.substack.com/pub/mule/p/the-death-of-software-20-a-better
A paradigm shift for the SaaS business model.
We are approaching a tipping point where bespoke software is generated on demand rather than purchased as a static SaaS product. This threatens the recurring revenue moats of traditional B2B software companies. Capital should reallocate toward the infrastructure providers enabling this generation, rather than the application layer which is rapidly commoditizing.
The $40 Billion Disaster That Nobody Is Talking About (Private Markets News)
https://open.substack.com/pub/privatemarketsnews/p/the-40-billion-disaster-that-nobody
A warning on private market valuations.
A critical autopsy of a massive capital destruction event in the private equity/venture space. This serves as a stark reminder that elevated AI valuations require rigorous due diligence. Investors must scrutinize “growth at all costs” narratives, as the market begins to ruthlessly punish burn rates that lack a clear path to profitability.
INFRASTRUCTURE & HARDWARE
AI Needs Electricians: The $300 Pay Boom (AI Disruption)
https://open.substack.com/pub/aidisruption/p/ai-needs-electricians-300-pay-boom
Identifies the hard physical bottleneck of the digital economy.
The constraint on AI scaling is shifting from GPU availability to power delivery. With skilled labor shortages driving electrician wages to $300/hour, data center CapEx estimates must be revised upward. This highlights utilities and infrastructure service firms as essential, defensive plays in the AI portfolio.
Research: AXTI & Indium Phosphide (InP) (Funda AI)
https://open.substack.com/pub/fundaai/p/researchaxti-indium-phosphide-inp
Novel insight: a specific deep dive into critical optical materials.
As data clusters scale, copper interconnects are failing. This piece outlines the thesis for Indium Phosphide (InP) as the necessary material for next-gen photonics and optical data transfer. AXTI represents a high-beta supply chain play on the inevitable transition to optical networking within the data center.
TSMC: AI Arsenal Builder (App Economy Insights)
https://open.substack.com/pub/appeconomyinsights/p/tsmc-ai-arsenal-builder
A reinforcement of TSMC’s monopoly status. Despite geopolitical risks, TSMC remains the singular bottleneck for silicon innovation. The analysis suggests their pricing power is increasing, protecting margins even as they ramp up massive CapEx for 2 nanometre and 1.4 nanometre processes.
The Connectivity Fabric of the AI Data Center (I Am Fabian)
https://open.substack.com/pub/iamfabian/p/the-connectivity-fabric-of-the-ai
Focuses on the “plumbing” of AI switches, interconnects, and cabling. As GPUs get faster, the network becomes the drag. Investors should look at Broadcom, Marvell, and optical networking firms that solve latency issues in massive clusters.
STRATEGY & ADOPTION
AI and the Age of Individual Empowerment (Big Technology)
https://open.substack.com/pub/bigtechnology/p/ai-and-the-age-of-individual-empowerment
Discusses the rise of the “one person unicorn.” AI tools are increasing individual leverage to the point where small teams can rival incumbent output. This suggests a VC shift toward smaller, leaner startups with lower capital requirements but high operational leverage.
Inside McKinsey’s AI Operating System (The AI Opportunity)
https://open.substack.com/pub/theaiopportunity/p/inside-mckinseys-ai-operating-system
An overview of how enterprise giants are attempting to standardise AI deployment. While useful for understanding corporate adoption cycles, it highlights the friction large firms face compared to agile competitors. Good for gauging the pace of Fortune 500 AI spend.
The AI Manager’s Schedule (Generatives)
https://open.substack.com/pub/generatives/p/the-ai-managers-schedule
Analyses how middle management is evolving. As AI handles coordination and reporting, manager value shifts to high level strategy and human empathy. Investment implication: Short traditional HR tech; long platforms that enable “management as a service.”
Why I Picked Legal AI & How to Build (AI Made Simple)
https://open.substack.com/pub/artificialintelligencemadesimple/p/why-i-picked-legal-ai-how-to-build
A case study on Vertical AI. Legal tech remains one of the highest ROI sectors for GenAI due to high text volume and billable hour structures. Offers a template for evaluating other vertical specific AI investments.
GEOPOLITICS & CONSUMER
ByteDance AI Glasses: Strategic Denial (Hello China Tech)
https://open.substack.com/pub/hellochinatech/p/bytedance-ai-glasses-strategic-denial
ByteDance is aggressively entering the hardware space to rival Meta. This signals that the battle for the “face” (smart glasses) is the next consumer frontier. Investors should watch for regulatory headwinds regarding Chinese hardware in Western markets.
ChatGPT’s Google Translate Feels Unnecessary (AI Disruption)
https://open.substack.com/pub/aidisruption/p/chatgpts-google-translate-feels-unnecessary
Real time, multimodal translation is now a commodity feature, not a standalone product. This spells trouble for single-purpose language learning apps and translation devices, viewing them as features within larger models rather than investable moats.
MACRO & MISC
The Quiet Tax Change That Could Reshape Portfolios (Nicholas Vardy)
https://open.substack.com/pub/nicholasvardy/p/the-quiet-tax-change-that-could-reshape
Discusses fiscal policy adjustments that may impact capital gains or sector specific write offs. While jurisdiction specific, it’s a reminder to optimise portfolios for tax efficiency as governments look to raise revenue from the tech boom.
Clouded Judgement: Platform Engineering (Clouded Judgement)
https://open.substack.com/pub/cloudedjudgement/p/clouded-judgement-11626-platform
A technical look at SaaS metrics and platform engineering efficiency. Useful for benchmarking cloud infrastructure stocks, specifically regarding how efficiently they are converting R&D spend into deployable code.
A quick quartet of new links.
The start of 2026 is defined by a rapid “product war” rather than just model updates. Anthropic’s “Cowork”, built using its own “Claude Code” tool, which has already hit $1B in revenue, aims to revolutionise desktop autonomy.
Simultaneously, Google is leveraging its ecosystem advantage with “Personal Intelligence” to reason across Gmail and Drive, while OpenAI introduces ads.
The shift toward actionable, agentic AI and the impending IPOs will define the industry’s financial future:
https://open.substack.com/pub/aisupremacy/p/anthropic-cowork-personal-intelligence-ads-chatgpt-the-ipo-wars-of-ai
AI is Hitting a Measurement Wall: This article argues that current AI benchmarks are hitting a “physics limit.”
By relying on discrete, binary measurements (tokens, pass/fail), we destroy the nuanced, probabilistic information that defines true intelligence, similar to how biology computes efficiently using noise in “sub-Landauer” domains.
We are effectively flying blind, potentially missing capabilities (or dangers) that exist below our current detection thresholds.
To progress toward AGI, the industry must pivot from brute force digital scaling (FLOPS) to thermodynamic efficiency (Joules/Op) and analog architectures that embrace, rather than suppress, noise:
https://open.substack.com/pub/artificialintelligencemadesimple/p/ai-is-hitting-a-measurement-wall
There’s a “Cost of Perfection” Paradox. Following the argument about biological efficiency, a critical, overlooked consequence of our current measurement standards is the massive energy penalty incurred by error correction.
Digital systems spend the vast majority of their energy forcing noisy signals into perfect “0s” and “1s” to satisfy our demand for deterministic, reproducible benchmarks.
By insisting on bit-perfect precision for evaluation, we are effectively banning energy-efficient analog or neuromorphic architectures that operate probabilistically.
We are measuring “computational compliance” rather than “functional intelligence,” forcing the industry to remain on a thermodynamically unsustainable path simply because our rulers cannot measure “noisy” success.
This underscores the failure of the “quantisation” of reasoning: current benchmarks suffer from severe information loss by treating high-dimensional continuous thought as low-dimensional discrete data.
When we grade a model on a binary “pass/fail” basis (e.g., did it pick option C?), we discard the trajectory of the reasoning.
A model might possess a superior, nuanced understanding of a problem but fail a rigid benchmark due to a trivial formatting error or a creative deviation.
This suggests we are likely discarding architectures that are actually smarter (better at generalising in semantic space) because they are worse at test taking (formatting discrete tokens), leading to a distortion in which models get funded and deployed.
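As a toy illustration of that last point (my own sketch, nothing from the linked article), here’s the difference between a rigid exact-match grader and a scorer that keeps partial credit for the reasoning trajectory. The two ‘models’, their steps and the weights are all invented:

```python
# Toy illustration of how binary grading discards the reasoning trajectory.
# All data below is made up for the example.

def exact_match(answer: str, expected: str) -> int:
    """Rigid benchmark scoring: 1 if the formatted answer matches exactly, else 0."""
    return int(answer.strip().lower() == expected.strip().lower())

def graded(steps_sound: list[bool], final_semantically_correct: bool) -> float:
    """Continuous scoring: partial credit for sound reasoning steps plus the answer."""
    step_score = sum(steps_sound) / len(steps_sound) if steps_sound else 0.0
    return 0.5 * step_score + 0.5 * float(final_semantically_correct)

expected = "C"

# Model A: lucky guess, no sound working, but perfectly formatted output.
model_a = {"answer": "C", "steps": [False, False], "semantic_ok": True}
# Model B: sound reasoning, semantically right, but a trivial formatting deviation.
model_b = {"answer": "Option (c)", "steps": [True, True, True], "semantic_ok": True}

for name, m in (("A", model_a), ("B", model_b)):
    print(name,
          "exact-match:", exact_match(m["answer"], expected),
          "graded:", graded(m["steps"], m["semantic_ok"]))
# Exact match ranks A (1) above B (0); the graded score reverses the ordering
# (A = 0.5 vs B = 1.0). The binary ruler rewards test-taking form, not understanding.
```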
All this creates a benchmark-to-architecture feedback loop. The “Measurement Wall” is a dangerous self-fulfilling prophecy: because we only measure static, distinct capabilities (like rote memorisation or arithmetic in GSM8K), we narrowly optimise architectures (specifically Transformers) to excel at these specific tests.
This creates an evolutionary dead end. We may not be hitting a limit of “AI” broadly, but rather a limit of Transformer intelligence as defined by current leaderboards.
This metric fixation blinds researchers to alternative architectures (like State Space Models or hybrids) that might be inferior at static benchmarks but vastly superior at dynamic, real-world adaptation – capabilities our current yardsticks simply cannot see.
Which Companies Will Survive AI Disruption?: Disputing the view that AI simply kills software companies and boosts chipmakers, this analysis argues that survival depends on “boring” fundamentals like capital allocation:
https://open.substack.com/pub/harveysawikin/p/which-companies-will-survive-ai-disruption
Using the divergence between Progressive (which invested in tech) and Frontier (which ignored infrastructure for dividends) as a case study, it suggests that winners and losers will emerge in every sector.
The key differentiator isn’t the industry, but whether a company uses its balance sheet to invest in AI as a competitive advantage or merely stagnates.
Q4 ’25 Foundry Earnings: Hit or Miss?: TSMC’s Q4 2025 results confirm that AI demand is real and accelerating, with revenue beating estimates and growing 36% year over year.
Driven by insatiable orders from Nvidia, Broadcom, and Google, TSMC is aggressively ramping up 2nm (N2) production and expanding advanced packaging (CoWoS) capacity, which remains in tight supply.
With a massive $52–56 billion capital budget for 2026, the semiconductor supply chain is fully committing to the hardware requirements of the next generation of AI models:
https://open.substack.com/pub/marklapedus/p/q4-25-foundry-earnings-hit-or-miss
Interesting to see the Agglomerations piece in today’s (as always excellent) MV w/e links on the purported non-effect (at least yet) of ML(/‘AI’) on employment.
Can’t say that I’m actually convinced by it, but it puts the case for the complex causation well and is a welcome evaluation, giving nuance to an unclear and at this time probably unfalsifiable set of scenarios for what’s going on and why.
One area which *is* being impacted right now (bigly, as the Donald would say) is SaaS, whose poster-child casualty is (mixing metaphors again) this train wreck:
https://open.substack.com/pub/davefriedman/p/the-saas-selloff-ai-disruption-in
With the broader SaaS drawdown (due to AI?) here:
https://substack.com/@reboundcapital/note/c-201240216?r=2kxl2k
And here’s the 2026 outlook from the AI futures project:
https://open.substack.com/pub/aifutures1/p/forecast-how-ai-will-progress-in
With a forecast data centre Capex of $520 bn ($10 bn per week) from just the top 4 hyperscalers!!:
https://substack.com/@onveston/note/c-201251030?r=2kxl2k
More pessimism on jobs and ‘AI’:
https://www.telegraph.co.uk/money/jobs/schools-universities/computer-science-graduates-cant-get-job/
With Richard Murphy on how the AI hype cycle could do the seemingly impossible under orthodox economic theory and push up inflation, rates, unemployment and cause austerity all at once:
https://youtu.be/8QciSYz8VyE?si=1VG2l9SWdT0847A-
But bear in mind here that LLMs/LRMs are definitely not just turbo-charged search:
https://substack.com/@neuralfoundry/note/c-201174680?r=2kxl2k
On the race to innovate beyond the ‘more powerful processor’ paradigm, Chinese customs block Trump-cleared Nvidia H200 GPUs:
https://www.theguardian.com/technology/2026/jan/17/china-blocks-nvidia-h200-ai-chips-that-us-government-cleared-for-export-report
And some neat Chinese ‘AI’ execution here:
https://open.substack.com/pub/hellochinatech/p/alibaba-qwen-ai-execution-race
But problems lie ahead:
https://hellochinatech.com/p/china-ai-fast-follower-trap
And maybe it’s all in vain if they’re caught in a deflationary death trap:
https://open.substack.com/pub/shanakaanslemperera/p/chinas-trillion-dollar-illusion
And there’s no guarantee of returns for anyone from this tech:
https://www.theguardian.com/technology/2026/jan/17/why-trillions-dollars-risk-no-guarantee-ai-reward
Facepalm moment. PLTR is an AI company not a SaaS one:
https://substack.com/@sergeycyw/note/c-201479074?r=2kxl2k
PLTR revenue growth is literally off the charts on this, but Duolingo and CrowdStrike sit at sweet spots:
https://substack.com/@sergeycyw/note/c-201106932?r=2kxl2k
Still, the fundamental point is sound. AI fear has made the sector ‘cheap’ on typical rear-view-mirror, backward-looking metrics:
https://substack.com/@sergeycyw/note/c-201148467?r=2kxl2k
What’s priced in:
https://open.substack.com/pub/tradesandgains/p/wall-streets-global-gamble-ai-chips
This one goes with the AI scapegoat thesis for those job losses (a couple of months old now):
https://open.substack.com/pub/europecapitalnews/p/the-new-scapegoat-how-ai-became-the
And I see they share my long-term bullish Alphabet outlook (this one from August; I’m now going to add them to my 200+ strong Substack list of free ‘subscriptions’, although, of course, that might just be an example of my own confirmation bias 😉 ):
https://open.substack.com/pub/europecapitalnews/p/alphabets-crossroads-navigating-the
In an AI arms race, maybe the only winning move is not to play?
https://open.substack.com/pub/europecapitalnews/p/is-the-us-winning-the-3-trillion
And, before reading this one, I’d kinda mentally written Meta off on AI, but now maybe I need to rethink?:
https://open.substack.com/pub/investinquality/p/meta-platforms-is-the-market-underestimating
Absolutely brilliant piece by Michael Green on the infrastructure limits of AI on growth:
https://open.substack.com/pub/michaelwgreen/p/the-thermodynamic-margin-call
Which industries are most exposed to AI disruption:
https://open.substack.com/pub/qualitystocks/p/thematic-opportunities-can-these
Ads on OpenAI. They’re losing. No moats. Not enough revenue. Too much debt. Not enough compute. Losing market share. Losing best staff. With Oracle they’re the bear case:
https://open.substack.com/pub/aidisruption/p/chatgpt-adds-ads-no-escape-for-8
Productivity and AI:
https://www.telegraph.co.uk/business/2026/01/18/silicon-valley-misfits-fixing-worlds-productivity-slump/
Palantir and AI, the origin story:
https://open.substack.com/pub/theaiopportunity/p/the-palantir-origin-playbook-20032013
Impressive: “Code named FastRender, the project produced over 3 million lines of code, with a core rendering engine written in Rust from the ground up, even including a custom JavaScript virtual machine”, and all in just 168 hours from a standing start:
https://open.substack.com/pub/aidisruption/p/cursor-ai-writes-3m-lines-in-168
No AI shortcuts with agents:
https://open.substack.com/pub/tylerfolkman/p/your-ai-agents-are-skipping-v1-thats
And please don’t ask transformers to write your prompt:
https://open.substack.com/pub/ruben/p/your-prompt-sucks
I leave updating the AI (related and themed) links for a day and I’m swamped. And this is just a sample. I’ll be keeping the intros brief on this one in the interests of time.
Day 46: ADBE—Still Alive!: A deep-dive on Adobe (ADBE): despite lagging stock performance, the software giant still grows revenue and cash flows, and may be undervalued if generative AI expands its core creative workflows.
https://open.substack.com/pub/diyinvestor1/p/day-46-adbestill-alive
The Sequoia AI Ascent Deck: A curated 20 slide deck from Sequoia’s AI Ascent outlining what’s truly changing in AI, where value will be captured, and how founders should position products in the age of agents and reasoning systems.
https://open.substack.com/pub/theaiopportunity/p/the-sequoia-ai-ascent-deck
ChatGPT Self Portrait: A playful look at how people are prompting ChatGPT to “draw itself” and what the results reveal about biases, framing effects, and human AI interaction quirks.
https://open.substack.com/pub/thezvi/p/chatgpt-self-portrait
Claude Cowork Now Has Permanent Memory: Anthropic’s Claude Cowork mode is reportedly getting persistent memory, letting the AI retain project and user context over time and act more like a long-term collaborator.
https://open.substack.com/pub/aidisruption/p/claude-cowork-now-has-permanent-memory
The $10 Billion Amazon “Failure” That May Power the Next AI Wave: A re-assessment of Amazon Alexa, arguing its large installed base and new AI voice layer could turn a former disappointment into a competitive AI interface and strategic asset.
https://open.substack.com/pub/nicholasvardy/p/the-10-billion-amazon-failure-that
Inside Texas’s AI Data Center Queue: An investigation into the massive 226 GW of proposed data center electricity demand in Texas and what it really says about the AI infrastructure boom versus speculative proposals.
https://open.substack.com/pub/davefriedman/p/inside-texass-ai-data-center-queue
Zhipu’s New Model Also Uses DeepSeek’s MLA, Runs on Apple M5: News on Zhipu’s new open-source lightweight language model (GLM-4.7-Flash) featuring a mixture-of-experts architecture, free API access, and Apple M-series local execution.
https://open.substack.com/pub/aidisruption/p/zhipus-new-model-also-uses-deepseeks
Stock of the Week: Taiwan Semiconductor: A market-focused piece on Taiwan Semiconductor Manufacturing Company (TSMC), highlighting record earnings driven by AI chip demand and ongoing capacity expansion.
https://open.substack.com/pub/qualitystocks/p/stock-of-the-week-taiwan-semiconductor
Machine Gains, Human Pains: A macro snapshot of AI’s broader impacts on labor, supply chains, productivity, and markets, showing both technological gains and socioeconomic frictions.
https://open.substack.com/pub/offthecharts/p/machine-gains-human-pains
AI Report Nuggets and Commentary Early 2026: A roundup of key AI industry reports with commentary on market concentration, hype versus fundamentals, infrastructure bottlenecks, and broader economic implications.
https://open.substack.com/pub/aisupremacy/p/ai-report-nuggets-and-commentary-2026-ai-trends
ImportAI #441 – My agents are working: Jack Clark reflects on using AI research agents to autonomously scan papers, compile insights and build tools while freeing up human time, illustrating how agents multiply individual productivity.
https://open.substack.com/pub/importai/p/import-ai-441-my-agents-are-working
Davos Dispatch: Is AI the New Altruism?: A report from Davos examining whether AI’s hype as a force for social good at the World Economic Forum is genuine impact or mostly rhetoric from tech leaders.
https://open.substack.com/pub/bigtechnology/p/davos-dispatch-is-ai-the-new-altruism
Anthropic: 12× Faster but Your Job at Risk: discussing Anthropic’s claims of highly accelerated AI performance and the implications for job displacement and workforce automation.
https://open.substack.com/pub/aidisruption/p/12x-faster-but-your-job-at-risk-anthropics
Alibaba Qwen Agent & Taobao Economics: Describes how Alibaba’s Qwen agent and integrations with Taobao/Alipay are reshaping ecommerce and creating new AI-driven workflows in China’s online economy.
https://open.substack.com/pub/hellochinatech/p/alibaba-qwen-agent-taobao-economics
The SaaS Suck – Arcadia Now: A critique of the current state of SaaS businesses/platforms, exploring why many fail to deliver value and the structural issues plaguing the sector.
https://open.substack.com/pub/arcadianow/p/the-saas-suck
Valuing AI – Russell G. Clark: An analytical piece on how to meaningfully value AI startups and technologies beyond headline multiples, likely touching on revenue models and economic impact.
https://open.substack.com/pub/russellgclark/p/valuing-ai
The Great Bifurcation: How a Broken… : examines socioeconomic or tech market splits (e.g., winners vs losers) driven by emerging technologies and systemic dysfunction.
https://open.substack.com/pub/shanakaanslemperera/p/the-great-bifurcation-how-a-broken
The SaaScopalypse Part 3 – Multibagger Nuggets: A continuation of a series critiquing the SaaS industry’s downturn, sequencing why many SaaS valuations and growth narratives are unraveling.
https://open.substack.com/pub/multibaggernuggets/p/the-saascopalypse-part-3
Micron Buys Fab from Taiwan’s Powerchip: Mark Lapedus discussion of Micron’s strategic acquisition of Powerchip’s Taiwanese fabrication facility to scale DRAM output amid surging AI-driven memory demand (focus on memory supply/industry dynamics; current news confirms the deal.)
https://open.substack.com/pub/marklapedus/p/micron-buys-fab-from-taiwans-powerchip
All about Gemini’s conductor.
https://open.substack.com/pub/aidisruption/p/gemini-cli-conductor-context-driven
And more on the Capex accounting shenanigans.
https://open.substack.com/pub/davefriedman/p/the-176-billion-accounting-question
There are many takeaways here but in particular note:
– Only (less than) 2% of the Texas data-centre-driven 226 GW requirement is actually connected. To wit: “Texas’s all-time peak electricity demand, set during a scorching August 2023 afternoon, was 85.5 gigawatts. The state’s total available generation capacity is around 103 gigawatts. Companies are now asking to connect loads totaling 2.6 times the state’s record peak demand, and 73% of those requests are from data centers.” 1 GW ≈ 750,000 homes. Something’s gotta give. (Quick arithmetic after these bullets.)
– Plenty above on the jobs vs AI theme / a productivity ‘revolution’ upon us.
– Not looking good for SaaS, but maybe the sell-off is overdone – or maybe not, given LLM alternatives, and the sell-off is just overdue?
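The quick arithmetic behind that first bullet, using only the figures quoted above (my numbers, not the article’s):

$$
\frac{226\ \text{GW requested}}{85.5\ \text{GW record peak}} \approx 2.6\times, \qquad 226\ \text{GW}\times 750{,}000\ \text{homes/GW} \approx 1.7\times10^{8}\ \text{homes},
$$

i.e. the interconnection queue amounts to roughly 170 million homes’ worth of load.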
Latest batch of AI themed pieces from the last 48 hours:
Investing in the Age of Extremes (Market Sentiment)
Analyses current market polarization, comparing today’s high valuations and tech concentration to historical bubbles, advising investors on navigating a “bipolar” market environment.
https://open.substack.com/pub/marketsentiment/p/investing-in-the-age-of-extremes
Further Breaking News Further Vindicating (Gary Marcus)
Marcus argues that recent admissions by AI leaders regarding “scaling walls” vindicate his long-standing skepticism about the limits of Large Language Models and deep learning.
https://open.substack.com/pub/garymarcus/p/further-breaking-news-further-vindicating
Did the Market Just Call the Top? (Nicholas Vardy)
Vardy examines technical indicators and sentiment data suggesting the current bull market may have peaked, warning investors to prepare for potential downside risks and volatility.
https://open.substack.com/pub/nicholasvardy/p/did-the-market-just-call-the-top
AI Deniers’ Ideas are Cheap (Michael Wiggins De Oliveira)
Criticises AI skepticism, arguing that betting against the technology is costly. He highlights specific investment opportunities where AI adoption is driving tangible financial outperformance.
https://open.substack.com/pub/michaelwigginsdeoliveira/p/ai-deniers-ideas-are-cheap-outperformance
571 Fooled, Runway Gen 4.5 Out (AI Disruption)
Covers the release of Runway’s Gen-4.5 video model and discusses a recent study where AI generated content successfully fooled a significant majority of human evaluators.
https://open.substack.com/pub/aidisruption/p/571-fooled-runway-gen-45-out
The Verification Inversion (Shanaka Anslem Perera)
Perera explores the growing asymmetry between cheap content generation and expensive verification, arguing that the cost of discerning truth will skyrocket as AI scales.
https://open.substack.com/pub/shanakaanslemperera/p/the-verification-inversion
AI Race, Iranian Malaise, How Tudor… (Adam Tooze)
In the intensifying global AI arms race, China flips the US on downloads
https://open.substack.com/pub/adamtooze/p/ai-race-iranian-malaise-how-tudor
The Energy Singularity (Shanaka Anslem Perera)
Discusses massive energy demands of scaling AI infrastructure, predicting a pivotal moment where physical power constraints will dictate the pace of future technological advancement.
https://open.substack.com/pub/shanakaanslemperera/p/the-energy-singularity-8f0
The Internet is Turning Against OpenAI 2026 (AI Supremacy)
Chronicles shifting public sentiment against OpenAI, citing rising concerns over copyright data usage, lack of transparency, and the company’s move away from its non profit roots.
https://open.substack.com/pub/aisupremacy/p/the-internet-is-turning-against-openai-2026
Breaking: Sir Demis Hassabis Becomes… (Gary Marcus)
Marcus highlights recent comments by DeepMind’s Demis Hassabis regarding the limitations of current generative AI, interpreting them as high-level confirmation of the “scaling wall” hypothesis.
https://open.substack.com/pub/garymarcus/p/breaking-sir-demis-hassabis-becomes
The Robots are Coming and Wall Street… (Shanaka Anslem Perera)
Analyses the intersection of humanoid robotics and financial markets, predicting how the commercialisation of physical AI agents will disrupt labor markets and drive investment trends.
https://open.substack.com/pub/shanakaanslemperera/p/the-robots-are-coming-and-wall-street
Day 47: Quick Valuation Tech/SaaS Companies (DIY Investor)
Provides a concise valuation framework for Tech and SaaS companies, offering specific metrics and ratios to help individual investors assess fair market value during earnings season.
https://open.substack.com/pub/diyinvestor1/p/day-47-quick-valuation-techsaas-companies
My Thoughts on the “Masaasacre” (Manu Invests)
Offers a personal perspective on the recent sharp sell off in SaaS stocks, evaluating whether the downturn represents a capitulation event or a fundamental sector reset.
https://open.substack.com/pub/manuinvests/p/my-thoughts-on-the-masaasacre-my
Big Ideas 2026 (ARK Invest)
ARK’s annual research report detailing their high-conviction investment themes for 2026, focusing on disruptive innovation in AI, multiomics, reusable rockets, and autonomous logistics.
https://www.ark-invest.com/thank-you-Big-Ideas-2026
R1 Turns One, DeepSeek Model 1 Emerges (AI Disruption)
Reports on the anniversary of the R1 model and the emergence of DeepSeek Model 1, highlighting the rapid advancement and competitive performance of Chinese open weights AI.
https://open.substack.com/pub/aidisruption/p/r1-turns-one-deepseek-model-1-emerges
Real Signal China VC 2025 (Hello China Tech)
Surveying the 2025 landscape of Chinese Venture Capital, identifying where smart money is flowing, specifically deep tech and AI, despite broader macroeconomic challenges in the region.
https://open.substack.com/pub/hellochinatech/p/real-signal-china-vc-2025
Milestones of China in AI of 2025 (AI Supremacy)
Summarises key 2025 achievements in Chinese AI, specifically focusing on the technical breakthroughs and global adoption of models from DeepSeek and Alibaba’s Qwen.
https://open.substack.com/pub/aisupremacy/p/milestones-of-china-in-ai-of-2025-deepseek-qwen
The Big Short Meets Marcus on AI (Gary Marcus)
Draws parallels between the 2008 housing bubble and the current AI hype cycle, suggesting that overinvestment in generative AI may lead to a similar market correction.
https://open.substack.com/pub/garymarcus/p/the-big-short-meets-marcus-on-ai
BlackSky AI: 15 Daily Views of Global… (RoboPub)
Details BlackSky’s AI capability to provide and use high frequency satellite imagery, allowing for 15 daily views of critical global locations to support real time intelligence and monitoring.
https://open.substack.com/pub/robopub/p/blacksky-ai-15-daily-views-of-global
CoreWeave Revenue Backlog Now Exceeds… (Michael Wiggins De Oliveira)
Reports on CoreWeave’s massive revenue backlog, using it as a bullish signal for the sustained, tangible demand for GPU compute and AI infrastructure.
https://open.substack.com/pub/michaelwigginsdeoliveira/p/coreweave-revenue-backlog-now-exceeds
The Thermodynamic Reckoning: OpenAI (Shanaka Anslem Perera)
Argues that OpenAI faces a “thermodynamic reckoning,” where physical constraints regarding heat dissipation and energy consumption will severely bottleneck further model scaling efforts.
https://open.substack.com/pub/shanakaanslemperera/p/the-thermodynamic-reckoning-openai
Reality check:
1). We’re nowhere near AGI, less still ASI.
2). Pure parameter scaling *might* (just might) get to AGI (eventually), but compute scaling won’t. The latter needs 10x computational increases for 10% entropy loss reduction. An impossible approach, physically speaking.
3). Parameter scaling would need on the order of 10^26 (or more) parameters for full AGI (better than humans across all cognitive domains), against still just 5×10^12 parameters for the leading frontier Gemini models now. Even assuming (perhaps optimistically) six orders of magnitude (a million-fold) of algorithmic improvement (effective FLOP to raw FLOP) and four orders of magnitude (10,000x) improvement in hardware efficiency (raw FLOP/joule), we’d still need over 2,000x more power to data centres than now, i.e. hundreds of terawatts of data centre consumption, compared to global total energy usage of just 10 terawatts (from 20 terawatts of production) now (the arithmetic is sketched after this list). Not going to happen, at least anytime soon.
4). Indeed, ambitious goals of 250 gigawatts by 2032 (OpenAI), equal to India’s entire electricity generation, imply, at a current $15 bn per gigawatt for the power plant and $35 bn per gigawatt for the data centre, Sam Altman’s crew spending $12.5 tn over the next 7 years. Also not going to happen. And that’s still anything from many hundreds up to thousands of times less energy than would be needed for ‘brute force’ parameter scaling to AGI, if that even works, which it might or might not.
5). We’re actually struggling even with more modest targets of 30-35 gigawatts in 2030, with the delays to grid hook ups for renewables and getting planning etc for gas and nuclear.
6). And that’s before we get into the issues of using synthetic data in pre-training, which may or may not work (probably not, but who knows?).
7). If we look at purely non-neural-net approaches to AGI/ASI, then really we’re back where we were in 2017, for all practical purposes, when LLMs first became ‘a thing’; i.e. just like back then, it seems we’re still likely decades away from real AGI, if we ever get it. Ilya Sutskever thinks a $3 bn p.a. research effort into hybrid neuro-symbolic / formal-logic reasoning AI might take 5 to 20 years to get to AGI, if it can be made to work. Before ChatGPT’s release in autumn 2022, the consensus/median ‘expert’ forecast for AGI was not much before 2045 (if this century, or indeed ever). That fits the 20-year end of Sutskever’s range.
8). On the other hand, we do already have some very capable narrow ML systems. Whatever the haters say (they’re wrong), LLMs are very impressive, as far as they go. Will they cause hyper-growth? I think not. Can they create wholly new knowledge? No, they can’t. Will they lead to accelerating progress? I doubt it very much. Do they fully substitute for human cognitive labour? No, they don’t. Are they useless? Absolutely not. Will they have no effect on growth? No. They will raise productivity, but it’s still a hell of a stretch to think that, with just this present tech, we’ll go from 1% to 2% p.a. (per hour worked) growth in output to 4%, 5%, or even 7% p.a. productivity gains. Does not getting even a 3% to 5% p.a. boost to growth (which is itself still far, far short of the 10x-plus hyper-growth expected from ASI, i.e. growth of >20% p.a.) mean that the economic impacts of ‘Gen AI’/LLMs are, or are going to be, trivial? No, the impacts will not be trivial. Mundane utility does not mean trivial utility. I reckon maybe we’re looking at 0.2%-0.5% extra productivity growth p.a. for a decade or two (my best guess). By 2035 that’s worth $2.5 tn to $5 tn p.a. in extra output globally.
9). Does that mean we're OK with the Capex going on and projected for the great AI build-out (i.e. $3 tn to $8 tn by 2028-30)? Nope. The revenues from model deployment will indeed grow exponentially. But going from low tens of billions now to mid hundreds of billions annually in recurring revenue, even in just five years, isn't either enough or fast enough for Capex already hitting half a trillion annualised and rising fast. Moreover, the revenue may not come at high (SaaS-like) margins, given the need to continually invest in new GPUs etc. and the power requirements. Something is going to break somewhere, and at some point, on the financing.
10). But the market can remain irrational longer than any of us can remain solvent. OpenAI and Oracle, plus certain pockets of private lending, are the only obvious immediate turkeys. The hyperscalers can keep pumping hundreds and hundreds of billions into the build-out every year for some years to come before they hit problems. It will eventually drag on Magnificent Seven valuations as investor patience wears thin, even as the positive effects of LLMs on productivity and on new business applications become more evident. But that doesn't necessarily mean a crash (the S&P 500 traded range-bound from 1964 to 1982), although it *could* herald one (as with the Nifty Fifty over 1970-72 or, more recently and infamously, the Nasdaq over 2000-2002).
11). For the moment, the hyperscalers seem content to spend big, albeit with some cracks appearing. As long as they continue to do so, picks-and-shovels plays will probably hold up. If the taps are turned off, the music stops (mixing my metaphors, yet again). Energy and digital storage have emerged as the most pressing bottlenecks, but the latter has now had a massive run-up. That's not a good look for an investor.
12). Old-fashioned diesel generators offer the quickest, cheap-and-dirty way to get new data centres online. There are a few value-looking US small/micro caps which supply these, but accessing them in an ISA/SIPP on a major (AJB/HL) UK platform is a pain.
13). Then again, there's just so much outside US mega cap tech which looks more compelling on mean reversion and on both relative and absolute valuation bases. Brazilian equities sit on sub-10x PEs, against US ones in 25x valuation space (and much higher for big tech), with signs, perhaps, of the early foothills of a commodities boom forming (copper, lithium and silver breakouts). And the gold price is not exactly a signal of confidence in corporate America's future (inflation-adjusted) earnings, even as US equities rise (albeit now trailing the RoW).
14). The combined 'AI' and energy transition story has investable elements to it if, by "investment", we mean the Grahamite definition of an operation which offers a sufficient assurance of the return of principal and an adequate return on it for the risk taken. However, such investable elements likely lie further downstream (of the data centres, the hyperscalers and the model providers themselves).
15). The hyperscalers are still somewhat investable, given the profitability of their ongoing, underlying core operations. But that investability is impaired quite significantly by their Capex. This isn't a FB/Meta in 2021/22 situation, with the Metaverse that nobody wanted or cared about, and which looked crap. LLMs really do work, in their fashion, and within their limits. But they're not as revolutionary as electricity, the steam engine, the agricultural revolution, or the first use of controlled fire, as AGI surely would be (ASI, meanwhile, if ever achieved, would arguably be the most important event since the first emergence of life on Earth).
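For anyone who wants to check the arithmetic behind points 4) and 8), here's a rough back-of-envelope sketch (Python). The per-gigawatt costs and growth figures are the ones quoted above; the ~$110 tn global output baseline is my own round-number assumption.

```python
# Back-of-envelope checks on points 4) and 8) above.

# Point 4): OpenAI's mooted 250 GW of data centre power by 2032.
gw = 250
capex_per_gw_bn = 15 + 35            # $15 bn/GW power plant + $35 bn/GW data centre
total_capex_tn = gw * capex_per_gw_bn / 1000
print(f"Implied build-out cost: ${total_capex_tn:.1f} tn over ~7 years")   # ~$12.5 tn

# Point 8): an extra 0.2%-0.5% p.a. of productivity growth, compounded to 2035.
world_gdp_tn = 110                   # assumption: rough current global output, $ tn
years = 10
for extra in (0.002, 0.005):
    uplift_tn = world_gdp_tn * ((1 + extra) ** years - 1)
    print(f"{extra:.1%} p.a. for {years} yrs = ~${uplift_tn:.1f} tn of extra output p.a.")
```

Run it and you get ~$12.5 tn of build-out cost and roughly $2 tn to $5.5 tn of extra annual output by 2035, i.e. the same ballpark as the figures above.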
This I think underscores the great rotation going on under the bonnet of ‘the market’ these past 12-18 months.
Ex-US DM and value-orientated EM have both outperformed. Precious metals have gone bananas (my junior gold miners ETF is pushing a 200% return over that period, and silver is off the charts). Ecopetrol and Petrobras have started, I think, to break higher (maybe I'm just seeing what I want to see 😉). Hopefully they've got a lot further to go.
Big US tech will have its day in the sun again, but, after nearly 15 years of dominance, reality about Capex and the glaring valuation spread to RoW (esp SCV and EM) are providing gravity, at least on a relative basis.
But if there's a breakthrough on AGI, maybe coming from a new paradigm via Google (easily the most innovative of the hyperscalers) or via China (i.e. a DeepSeek-times-a-thousand scenario), then all bets are well and truly off.
*If* we get aligned ASI then it’ll be aligned with the technocratic oligopoly, Elysium 2.0:
https://youtu.be/zAxBoEkY4d0?si=ldEc6iEY8HIZpVJe
Since the 1980s, the share of US wealth owned or in effect controlled by the top 0.00001% of persons (i.e. just 20 to 30 or so individuals, and their families) has risen from 4% to 12%. In a late 22nd century ASI world, that could exceed 99%, maybe well over 99.9% (i.e. if global aggregate wealth can increase thousands to millions of times under a fully human-labour-substituting, ASI-led, automated robotic production, hyper-growth scenario).
Even without AGI/ASI, it’s very questionable if the technology will allow what we like to consider democracy to survive ‘intact’:
https://open.substack.com/pub/garymarcus/p/ai-bot-swarms-threaten-to-undermine
Elon Musk, already worth some $750 bn today, is ascendant. He talks of SpaceX (42% ownership, 100% effective control), due to float next year at a $1.5 tn valuation, being worth $100 tn within his lifetime:
https://open.substack.com/pub/shanakaanslemperera/p/the-davos-inversion-the-day-elon
And, speaking of the WEF, the Davos and the AI sets have now merged into one mass of momentum:
https://open.substack.com/pub/bigtechnology/p/ai-davos-four-reflections-from-the
Meanwhile, YouTubers are going wild again on TSLA for robotics (and FSD/robotaxis and energy storage, the latter being essential for AI and advanced humanoid robotics):
https://youtu.be/UdEAs9jjNQU?si=x-nMd3Yr-ASd2GWg
Even though calmer heads have a more sober perspective:
https://open.substack.com/pub/bradmunchen/p/more-tesla-driverless-pr-stunts-in
All considered, *if we can control ASI* (doubtful, I'm afraid), should it emerge this century (plausible), then it'll be birthed by the likes of (or the successors to) Google and Anthropic, and not by either xAI or OpenAI (the jokers in the pack?):
https://open.substack.com/pub/neuralfoundry/p/openai-and-xai-are-losing-the-ai
Google's looking at token processing increases in the 50x to 100x per year range (most recently >3x in 3 months, which compounds to something like 100x, i.e. ~10,000%, annualised). That's incredible:
https://open.substack.com/pub/apoorv03/p/why-we-invested-in-baseten
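Quick compounding check on that annualisation (a sketch only; the 3x-a-quarter figure is just the one quoted above):

```python
# Compound a quarterly growth multiple into an annual figure.
quarterly_multiple = 3.0             # ">3x in 3 months", per the link above
annual_multiple = quarterly_multiple ** 4
print(f"{quarterly_multiple:.0f}x per quarter compounds to ~{annual_multiple:.0f}x per year")
# 3x/quarter is ~81x/yr; nearer 3.2x/quarter would give the ~100x (~10,000%) figure.
```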
Although the future won’t belong to any one model:
https://open.substack.com/pub/cloudedjudgement/p/clouded-judgement-12326-the-year
And explicit monetisation remains the near term challenge:
https://open.substack.com/pub/aidisruption/p/openai-we-may-take-a-cut-of-future
Of course, there’s always an angle to make money, as the recent surge in memory stocks shows:
https://open.substack.com/pub/tmtbreakout/p/sndkmemory-the-perfect-storm-jensen
But, ultimately, shortages and bottlenecks in ‘AI’ will lead to surpluses and to gluts:
https://open.substack.com/pub/arcadianow/p/is-there-a-compute-surplus-on-the
As one active investment newsletter (h/t MMN) put it yesterday: “investors… they’re betting that inflation is coming back. They think the Fed has lost control. Which is why even though the Fed cut rates, long-term bond yields went up. They’re wrong. Inflation has cooled significantly. Official numbers (CPI) sit around 2.7% year-over-year for December 2025, with core at 2.6%. The Fed’s preferred PCE measure is similar, hovering near 2.6–2.8%. Real-time trackers like Truflation are even lower, recently dipping to 1.2%, showing disinflationary trends ahead of the official stats. The disconnect…The bond market pushed the 30-year yield near 4.9%. They’re pricing in a future that AI has already canceled. The real cause…Efficiency. The Fed thinks interest rates control prices. They don’t. Better tools control prices. Look at Lemonade. The insurance company just launched “Autonomous Car Insurance” for Tesla owners using Full Self-Driving mode, slashing per-mile rates by about 50% when FSD is engaged. Why? Not because of a Fed meeting. But because the data shows self-driving miles carry much lower risk—fewer accidents, lower claims. That isn’t a coupon. That’s a paradigm shift. We are seeing this everywhere. AI is doing to services what the assembly line did to the Model T… It makes things better, faster, and cheaper all at once. That is deflation, plain and simple. The bankers are studying history textbooks. The technologists are writing new code, inventing things that never existed before. One group thinks money is about to get worthless; the other is making everything cost less.”
Thanks for the h/t in the w/e Robot Overlord part of the links today @TI. Super appreciated 🙂
@Delta Hedge — You’re welcome. It’s such a good resource but not sure how many are as interested as we are. Probably needs promoting away from Monevator. If some AI blogger noticed your work and linked we’d be off! 😉
It's difficult to know who it would appeal to @TI, as the 'field' here is now so polarised between the AI boosters (most) and the haters (fewer, but plenty). I respect Ed Zitron (and agree with a great deal, but by no means all, of his analysis), but I'm no hater. The tech is real, for better or worse. But I'm no Tom Nash either. It's not a surefire way to riches. And (contra Altman until recently) in some sense scaling is, definitionally, a dead end. You can't scale to AGI when capability grows only with roughly the twentieth root of computational increases, or the tenth root of parameter increases. The first is unphysical (not possible), the latter manifestly implausible. I guess I land with Toby Ord and the EA crowd, whom I think I linked to some hundreds of posts back:
https://www.tobyord.com/writing/the-scaling-paradox
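To put rough numbers on that root-scaling point (a sketch only; the one-twentieth and one-tenth exponents are the claim above, not established facts):

```python
# If capability grows as compute**(1/20) (or parameters**(1/10)), invert it to see the cost
# of a given capability multiple k: compute must rise by k**20, parameters by k**10.
for k in (2, 10):
    print(f"{k}x capability -> {k ** 20:.0e}x compute, or {k ** 10:.0e}x parameters")
# 2x capability -> ~1e+06x compute; 10x capability -> 1e+20x compute: hence 'unphysical'.
```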
Tomas Pueyo of Uncharted Territories is super bullish on AI progress today (more upbeat than I am, tbh):
https://open.substack.com/pub/unchartedterritories/p/ai-in-2026
He was also very bullish on TSLA robotaxis IIRC and quick to dismiss LiDAR / Waymo on costs (a bit too quick, IMHO).
Hoarders, managers and builders, taking the temperature on the tech bros’ AI related singularitarianism after Davos:
https://open.substack.com/pub/exponentialview/p/the-end-of-the-fictions
Whatever the score on the singularity (and those tech and energy learning curves) OpenAI still looks like toast (to me at least):
https://open.substack.com/pub/aisupremacy/p/openai-is-about-to-face-real-competition-enterprise-ai
I may have to end up eating my words here, but if there’s one vector of AI exposure which I don’t want in my investments it’d be to OpenAI post the 2027 IPO. And here’s Altman today shilling for cash from the Gulf:
https://open.substack.com/pub/adamtooze/p/openai-hustling-for-gulf-dollars
Meanwhile the SaaSapocalypse continues:
https://open.substack.com/pub/sergeycyw/p/saas-valuation-weekly-recap-e60
If ML isn't the end of the sector, it could be a bargain. If it is, though, then a value trap awaits.
Personally, I've dabbled a bit in Western Union stock as the anti-fintech, anti-stablecoin, anti-XRP, anti-AI-disruption play in the cross-border payments sector ($2 quadrillion of annual payments volume, nearly 20x global output, across 3.6 trillion transactions a year). One of the lowest PEs in the US (single digits), a 10% dividend, share buybacks. It's trading like a liquidation stock but, when I go into the Post Office, I can't help but see the prominent WU franchise and people actively using it.
The value is in the disconnect. Streaming and YouTube make terrestrial TV pointless but most people still have a TV and a licence.
Meanwhile, the relaxation of restrictions on exporting H200 GPUs to China might not help Nvidia as much as Mr Market currently thinks:
https://open.substack.com/pub/shanakaanslemperera/p/the-two-gate-trap-45-trillion-positioned
GPTs and the nature of consciousness
https://youtu.be/gLSQ4Hs2_OA?si=1N9euuwBrGiDMAD8
LLMs and SLMs: computation per unit of inference or total inference space?:
https://youtu.be/AVQzG2MY858?si=QeVkrJTXpy36fU6-
Hassabis with a Davos booster on AGI timelines:
https://youtu.be/bgBfobN2A7A?si=PsvHTBBOpqn2ytDo
I don't think the questioning-progress phase ever ended. It feels closer to the end of the beginning of the questioning than the beginning of the end. Is it really just a matter of more efficient context windows and some basic continual learning (so absent from language models now), or are there many more, and much bigger, steps required?
Interesting though to hear reiterated in this interview that DeepMind's AlphaFold and AlphaGo lean into hybrid neuro-symbolic architectures, with Monte Carlo (tree search) learning; even if Google rejects, as here, formal-programming-only systems as ever being capable of AGI. Can the narrow-domain, in-the-wild learning of those hybrids generalise effectively to the foundational requirement of continual learning for AGI?
Also revealing that, contra Altman's stance, Hassabis readily concedes that the creativity and novelty of AGI require "far, far" more (and different?) than current models deliver.
More on AI @Davos:
https://open.substack.com/pub/theaiopportunity/p/ai-davos-what-happens-after-ai-takes
All the big beasts interviewed.
Literally on SaaSmaggedon:
https://open.substack.com/pub/renesellmann/p/saasmageddon-csu-almost-buying-wise
And this one was a coin flip on whether to go into the 'What to do if you're queasy about US valuations' Maven/Mogul piece thread or into this one:
https://youtu.be/JrZZXO0mJm4?si=IBqJhDSXpgWwY_8o
Dispersion within the Magnificent Seven, and the convergence in aggregate EPS growth rates between them as a group and the wider 'S&P 493', are both arguably very bullish signals.
A cheap US-listed mega cap tech stock?! Rarer than hens' teeth:
https://substack.com/@studios/note/c-204342738?r=62vrvp
The TTM P/E is 37x, but the forward P/E at 11x is what matters for the future; albeit the stock is going parabolic, which, even though it's underpinned by fast-improving fundamentals, is generally a red flag.
A fascinating look at AI and internet search, referrals, connectivity and web crawlers in 2025 from Cloudflare:
https://blog.cloudflare.com/radar-2025-year-in-review/
And a duplex (diptych?) from today's 'Artificial Ignorance' Substack: one on being a 'manager' of LLMs versus a manager of people, and one on the primitives of AI:
https://open.substack.com/pub/generatives/p/the-ai-managers-schedule
https://open.substack.com/pub/generatives/p/skills-tools-and-mcps-whats-the-difference
Economics of AI, lower inference costs but worse business models:
https://substack.com/@scstrategist/note/c-204442204?r=62vrvp
https://open.substack.com/pub/lesbarclays/p/who-captures-the-value-when-ai-inference?l
Per JPM here:
https://am.jpmorgan.com/us/en/asset-management/liq/insights/market-themes/artificial-intelligence/
“J.P. Morgan reports that the cost efficiency frontier has dropped by 99.7%, from $37.50 per million tokens for GPT-4 in March 2023 to $0.14 for GPT-5 Nano by August 2025. The balanced frontier, which optimizes both capability and cost, has improved just as much. SemiAnalysis finds that algorithmic improvements alone can boost efficiency by four to ten times each year. Nvidia, using SemiAnalysis’s InferenceMax v1 benchmarks, shows that software updates for the B200 cut inference costs fivefold in just two months, from $0.11 to $0.02 per million tokens for open-source models”
With task length (eyeballing the JPM slides) completed autonomously at 80% success going up roughly 4.5-fold last year, from 6 minutes to a more practically useful 27 minutes, and with the bank estimating (or reporting; the source isn't clear) a very chunky 1.5%-3% p.a. increase to US labour productivity.
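A quick sanity check on the numbers in that JPM quote and on the task-length multiple (a sketch, using the figures exactly as quoted):

```python
# Sanity check of the JPM figures quoted above.
old_cost, new_cost = 37.50, 0.14     # $ per million tokens, GPT-4 (Mar 2023) vs GPT-5 Nano (Aug 2025)
print(f"Cost per million tokens down {1 - new_cost / old_cost:.1%}")   # ~99.6%, vs the quoted 99.7%

# Autonomously completed task length at 80% success: 6 minutes -> 27 minutes.
print(f"Task length multiple: {27 / 6:.1f}x")                          # 4.5x
```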
AI beyond the note taker:
https://open.substack.com/pub/ruben/p/you-forgot-70-of-yesterdays-meeting?
OpenAI revenue scaling linearly with power consumption and compute. Anthropic has a clearer path than OpenAI to being cashflow positive:
https://open.substack.com/pub/exponentialview/p/ev-558
And in other news, @Mrs DH tells me that the February 2026 issue of "Garden Answers" (already published, so they live in the future!) reports (on p.9) that the boffins at Ohio State University have found that "strands in the mushroom mycelium (the vegetative part of fungi) can act as living processors, or memristors, transmitting electrical impulses like neurons in the brain and storing information with around 90% accuracy". There's then a picture of a circuit board hooked up to some large mushrooms. Apparently the biodegradable mushrooms are "cheaper than conventional semiconductors that use rare minerals and expensive energy". Miniaturisation appears to be an issue.
All is not well at Thinking Machines:
https://www.telegraph.co.uk/business/2026/01/25/50bn-valuation-ano-revenues-start-up-that-sums-up-euphoria/
The enthusiasm for AI startups in private markets (Anthropic looking to up-round its valuation in its next funding series from $170 bn to $350 bn, and OpenAI looking to raise the valuation on its next round from $500 bn to $830 bn, eyeing a $1 tn IPO next year) underscores the risks in public market valuations, i.e. the S&P 500 has historically experienced extended periods of underperformance after valuation peaks, including:
1. Underperforming cash from 1966 to 1982, during highly inflationary times.
2. A 0% average return from 1929 to 1949.
3. A lost decade prior to the recent 15 year bull market.
But maybe the ‘bubble’ has *already* burst. Interesting perspective here:
https://youtu.be/6VXhZQ8flz8?si=J4-fjRk-b07KXoI1
Meanwhile, Anthropic blocks third party usage:
https://open.substack.com/pub/aidisruption/p/anthropic-blocks-third-parties-openai
Feels time for some Bear case views (these are a few days to a month old now, but YT’s algo just served them up on the current feed):
Zitron’s take on impending (apparently) data centre underutilisation:
https://youtu.be/rsCGisbz04Y?si=uukPrGjkr63-ZeIx
And Gary on the scaling wall here:
https://youtu.be/aI7XknJJC5Q?si=5IdniepjVWo9rs5V
And AI being a House of Cards here:
https://youtu.be/fUVOP0tWn2U?si=smnQKpBsRiMQMEOi
The ensh*ttification of content by AI, a cautionary tale (with sympathy for Lenny Susskind, pioneer of string theory, whose likeness has been hijacked here by AI slop masquerading as popular science communication):
https://youtube.com/shorts/8CLY0FRHQ00?si=gS6MoLLnmxiYKrsI
Taiwanese reliance on the ‘silicon shield’ (of making the most advanced chips for AI) begins to crumble:
https://open.substack.com/pub/themonentaryskeptic703/p/the-silicon-triangle
Could physical AI usher in a new era for industrial robots?:
https://uk.investing.com/news/stock-market-news/could-physical-ai-usher-in-a-new-era-for-industrial-robots-4466005
The Dean of Valuation, Prof. Damodaran, says that we need to see more revenue from AI:
https://youtu.be/ZE-hqrDRyzg?si=dUPbwHTYOCx4DtpI
And it’s all about memory and Nvidia Rubin GPUs as we enter the post-H200 era:
https://open.substack.com/pub/marklapedus/p/the-latest-news-in-ic-packaging-and-b50
Latest wave of links breaking on my email/Substack shore:
The losers in the AI race:
https://open.substack.com/pub/neuralfoundry/p/openai-and-xai-are-losing-the-ai
A semis mini special round up:
https://open.substack.com/pub/jimmysjournal/p/semiconductor-industry-outlook-2026
https://open.substack.com/pub/thesemiconductornewsletter/p/week-4-2026
https://open.substack.com/pub/thesemiconductornewsletter/p/glass-substrates-as-surprising-new
With some specifics and deep dives for TSMC:
https://open.substack.com/pub/fundaai/p/deepasml-upside-from-tsmc-expansion
https://open.substack.com/pub/crackthemarket/p/tsmc-manufacturing-the-worlds-ai
For Tencent:
https://open.substack.com/pub/hellochinatech/p/enflame-tencent-captive-supplier
And, for the EUV supplier behind the foundries, ASML:
https://open.substack.com/pub/tacticzhazel/p/sell-asml-after-this-100-increase
More on Claude Code:
https://open.substack.com/pub/neuralfoundry/p/claude-code-will-be-the-most-important
And SaaSmaggedon, with special reference to roll-up Constellation Software's (previously rare) epic drawdown:
https://open.substack.com/pub/margarineofsafety/p/softwares-funeral
And the K shaped economy and whether AI is playing a role:
https://www.telegraph.co.uk/business/2026/01/24/trump-may-boast-but-his-two-speed-economy-could-blow-up/
Thinking through Damodaran's interview linked above in my last post: if every company has equal access to AI, they'll all save costs but then have to compete those savings away on price, unless they have a durable, wide (anti-)competitive moat (e.g. Meta's network effects, or the non-pecuniary switching costs of Apple's ecosystem). Without one, they'll lose revenue, with aggregate earnings reductions and, ultimately, margin compression, but lower prices for end consumers/customers and a deflationary effect.
But for those companies behind such a moat, they can maintain prices and simultaneously decrease their costs, making them even more dominant.
Altman/OpenAI v Elon/xAI, and worse than I feared. Like the Iran-Iraq war in the eighties, if only they could both lose:
https://youtu.be/csybdOY_CQM?si=TCzpdnrlQfLa1452
Whatever the outcome of that spat, Anthropic / Claude Code seems to be winning ‘bigly’ as the KOTUS would put it:
https://open.substack.com/pub/uncoveralpha/p/anthropics-claude-code-is-having
https://open.substack.com/pub/thezvi/p/claudes-constitutional-structure
https://open.substack.com/pub/aidisruption/p/claude-code-ralph-loop-ai-debugs
But OpenAI's not out for the count yet. Far from it. 5.8x year-on-year enterprise revenue growth and a $50 bn (Middle East focussed) raise at an up-to-$850 bn valuation:
https://open.substack.com/pub/exponentialview/p/data-to-start-your-week-26-01-26
https://open.substack.com/pub/bigtechnology/p/openais-latest-mega-fundraise-big
But recall Wall Street Millennial's computation: at a 32.5% share of OpenAI's equity, MSFT's $4.1 bn single-quarter loss on its investment implies that OpenAI's losses are already ~$50 bn annualised, on an ARR of $20 bn!
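The implied arithmetic, for anyone checking (a sketch; the 32.5% stake and $4.1 bn hit are Wall Street Millennial's figures as quoted, not mine):

```python
# Gross up Microsoft's share of OpenAI's quarterly loss and annualise it.
msft_stake = 0.325                   # 32.5% equity share, per the figure quoted
msft_quarterly_hit_bn = 4.1          # $4.1 bn single-quarter loss on the investment
openai_annualised_loss_bn = (msft_quarterly_hit_bn / msft_stake) * 4
print(f"Implied OpenAI annualised losses: ~${openai_annualised_loss_bn:.0f} bn")   # ~$50 bn
```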
After the surge in AI data centre related memory stocks, now it’s the turn of data centre optical connectivity to come into the market’s focus.
https://open.substack.com/pub/moneymachinenewsletter/p/why-the-next-ai-winners-wont-be-chips
More developments on GPUs becoming a financial asset class:
https://open.substack.com/pub/davefriedman/p/how-gpus-became-the-newest-financial
All about semis and the supply chain line up:
https://open.substack.com/pub/importai/p/import-ai-442-winners-and-losers
https://open.substack.com/pub/citrini/p/semis-memo-muscle-memory
This could be significant. Quantum computing firm buys a semi foundry. Where’s the synergy / pivot?:
https://open.substack.com/pub/marklapedus/p/quantum-computing-company-acquires
And that data centre is space idea just won’t die:
https://open.substack.com/pub/techfund/p/ai-compute-in-space-economics-and
An AI assistant with access to all your sensitive personal financial data. What could possibly go wrong?:
https://www.telegraph.co.uk/business/2026/01/27/ai-breakthrough-runs-work-and-finances-via-whatsapp/
After all, if this can be hacked…
https://open.substack.com/pub/shanakaanslemperera/p/the-inverted-panopticon
And agentic AI risk is not just on the malware/ security front:
https://www.telegraph.co.uk/business/2026/01/26/ai-jobs-carnage-is-harming-britain-more-than-global-rivals/
Along with CrowdStrike, this is the sort of business to benefit:
https://open.substack.com/pub/michaelwigginsdeoliveira/p/nbis-the-market-is-missing-what-comes
And there’s that AI – satellite convergence:
https://open.substack.com/pub/nicholasvardy/p/how-to-think-about-spacex-before
Who takes the gains:
https://youtu.be/aOh2cqTUlKk?si=SLX8NPybXDbZ1Yy4
This one could be the proverbial ‘game changer’:
https://youtu.be/UHum7glkRTs?si=6GWo-wVOaV0VLyYN
Entrepreneurship beyond the information age and into the intelligence era, how do you decide when to delegate to AI agents?:
https://open.substack.com/pub/oneusefulthing/p/management-as-ai-superpower
Apps being built off of Claude Code:
https://open.substack.com/pub/aidisruption/p/vercel-open-sources-ai-browser-automation
And the deeper historical roots of AI, and indeed of computing:
https://youtu.be/_91Om83eKYE?si=jQWbEmLTkE3-Dq4b
A trio from the Beeb on AI today
Victors and carnage:
https://www.bbc.com/news/articles/cr57p2ve8glo
China winning?:
https://www.bbc.com/news/articles/c86v52gv726o
Sight to the blind:
https://www.bbc.com/future/article/20260126-ai-mirrors-are-changing-the-way-blind-people-see-themselves
It's all about memory now, not GPUs:
https://open.substack.com/pub/shanakaanslemperera/p/the-limiting-reagent-triad-602-billion
Whither Reddit in the face of the AI storm?:
https://open.substack.com/pub/michaelwigginsdeoliveira/p/reddit-markets-ai-panic-with-40-growth
More innovation, in the semi layer now:
https://open.substack.com/pub/artificialintelligencemadesimple/p/meet-wafer-the-yc-startup-making
Love the deepfake of Sam getting the wrestling treatment. OpenAI really is the weakest link, or co-weakest with Oracle:
https://open.substack.com/pub/neuralfoundry/p/openai-is-getting-crushed-on-all
But it might not matter. We’ve something perhaps more consequential than the ‘Greenspan Put’ for equities for AI now. This PoV from one newsletter (MMN) today: “we are currently in a silent arms race with China for AGI. The U.S. government literally cannot allow the AI infrastructure sector to fail. If private funding dries up, the government will backstop it. Regardless of your point of view, whether you think it’s good or not, it’s reality. This is “Too Big To Fail” 2.0.”
The AI trade is broadening out:
https://open.substack.com/pub/leadlagreport/p/the-ai-trade-is-broadening-and-portfolios
P(Doom) postponed:
https://open.substack.com/pub/aifutures1/p/clarifying-how-our-ai-timelines-forecasts
But still not much time to wake up to the risks:
https://www.theguardian.com/technology/2026/jan/27/wake-up-to-the-risks-of-ai-they-are-almost-here-anthropic-boss-warns
Case study of a winner in the AI enabled cyber security race:
https://open.substack.com/pub/businessinvest/p/115-nrr-23-revenue-growth-31-cash
And is it all about taste not AI outperforming humans?:
https://open.substack.com/pub/theaiopportunity/p/the-business-of-taste
OMG look at the memory stocks go!
Not so much up and to the left as pure parabolic. And I thought my junior gold miners and silver miners ETF was doing well……FOMO? 🙁
Micron (MU) is at $428.84 (+4.53%)
http://uk.investing.com/equities/micron-tech
SanDisk (SNDK) is at $508.50 (+5.62%)
http://uk.investing.com/equities/sandisk-corp
SK Hynix Inc (000660) is at ₩841,000 (+5.13%)
http://uk.investing.com/equities/sk-hynix-inc
Gemini forging ahead with image recognition with DeepMind:
https://open.substack.com/pub/aidisruption/p/gemini-3-launches-agentic-vision
More on Claude’s constitution:
https://open.substack.com/pub/thezvi/p/open-problems-with-claudes-constitution
How fast or slow are things really going with agentic capabilities?:
https://open.substack.com/pub/fundaai/p/deepllm-2026-from-the-illusion-of
China shifting to inference:
https://open.substack.com/pub/hellochinatech/p/china-ai-inference-telecom-operators
Clawdbot hits some IP issues:
https://open.substack.com/pub/aidisruption/p/clawdbot-exposed-built-in-minutes
Vibe coding and SaaS part Deux:
https://open.substack.com/pub/offthecharts/p/vibe-coded-solutions-vs-enterprise
And a new way forward for ChatGPT and spreadsheets:
https://open.substack.com/pub/ruben/p/ai-couldnt-do-excel
Oops, that should have been 'up and to the right!' (in the case of memory stocks, this past month has been more a case of straight-up vertical ascent on the charts).
The Foresight Institute (the granddaddy of the futurist think tanks) has just done a massive AI YT shorts drop:
Anders Sandberg is always entertaining on AI:
https://youtu.be/loiOT7QTorI?si=2iKtbslMZFLpNkuk
David Eagleman on the effect of AI on humans and our brains:
https://youtu.be/psHiTmafKeg?si=NbuRgb3s2bhmkSVY
Whole brain AI emulation?:
https://youtu.be/gUXMZCuk4_Y?si=NYPWkURXZKz4kpZa
Fiduciary AI?:
https://youtu.be/egJAS5fzDw4?si=4Z_CUITBH0wpMBEi
With several others on AI and AGI adjacent themes.
And this freestanding one from a climate scientist on the effect of AI power demands:
https://youtu.be/py0XpKxAnNU?si=791yMUtVg88donSs
Whoaaa
Just catching the market equivalent of the FA Cup and the Euros combined in the after-hours earnings for Meta, Microsoft and Tesla on CNBC.
I don't think this is irrational exuberance a la Greenspan in 1996. MSFT and META posted decent top and bottom line beats but were initially (algo trades) marked down 7% and 5% in (admittedly probably thin / less liquid) after-hours trading, although who knows where it'll land at the open tomorrow, especially given the guidance conference calls tonight are still to go.
Looks though like Meta saying that it’ll spend $115 bn to $135bn in 2026 (calendar year) didn’t (at least initially) land well, regardless of the improving fundamentals. It’s one heck of a lot of wodge for one business to spend on one type of capital expenditure in one twelve month period, that’s for sure.
Some suggestions being made that the recent recapitalisation accounting for Microsoft’s OpenAI stake is not being well received.
It also seems like Mr Market is collectively concerned about OpenAI being 45% of MSFT's astonishing sales order backlog of $625 bn (holy smokes to that number!)
In any event, whatever is going on under the bonnet, I take heart from the somewhat disappointing reaction. If they'd popped, it would show euphoria, which is end-of-cycle. Falls on top and bottom line beats (versus consensus) are, IMHO, more likely associated either with early cycle (disbelief, leading to climbing the wall of worry) or with a bubble that has already popped (rather than one immediately before its crescendo).
Let’s hope it’s the latter, especially given that it’s only a combination of Capex and paper wealthy stock owners’ spending which are, together, keeping the wheels turning on the US economy.
If America catches a cold then we all do, and there aren't too many places to hide from the collateral damage of a US crash.
Gary on LLMs eating themselves:
https://youtu.be/yex6Ti2VPr0?si=8IHurb9O5yVd-cpc
Data, data, data. But the contrasting ability to learn from little is the human edge, for now.
Continual learning necessary for AGI? Ex OpenAI researcher changed his mind in past year:
https://youtu.be/XtPZGVpbzOE?si=BOtrY1G-X8LSin6b
One bleak view on OpenAI’s solvency:
https://youtu.be/J3pidxrneeQ?si=lX-KyMZtI-U3GVJJ
Thinking Machines disassembled:
https://youtu.be/kj-SOMwDNhA?si=rafsmO9m-8FJopv-
Lefty Novara media on an AI jobs apocalypse:
https://youtu.be/Y5A_c4pvo7I?si=lts5JbANXNOYdxiF
1999 parallels:
https://www.youtube.com/live/O1aU5ewHATQ?si=4s6DjIOMAnrNOOyG
Deep:
https://open.substack.com/pub/afewthings/p/the-source-code-of-reality-revisiting
Transformer crunch:
https://open.substack.com/pub/themonentaryskeptic703/p/intermediation
Turns out one type of transformer is necessary for the other!
And data, data, data means memory, memory, memory. MMN today: “Dynamic random access memory (DRAM) spot prices just hit a record ~$30…average spot prices for 16GB DDR4 is up over 2,300% from last year to ~$77.”
Data is the new oil:
https://open.substack.com/pub/nicholasvardy/p/memory-is-the-new-oil-why-storage
GPT-5 unit economics:
https://open.substack.com/pub/exponentialview/p/inside-openais-unit-economics-epoch-exponentialview
ASML earnings broken out x3:
https://open.substack.com/pub/massivemoats/p/asml-q4-2025-earnings-review
https://open.substack.com/pub/tacticzhazel/p/asml-q4-earnings-review-update-valuation
https://open.substack.com/pub/fundaai/p/reviewasml-4q25-record-high-bookings
And META’s x2:
https://open.substack.com/pub/fundaai/p/reviewmeta-4q25-drivers-of-1q26-accelerating
https://open.substack.com/pub/jamesfoord/p/meta-crushes-but-im-buying-this-stock
MSTF and that OpenAI dependency:
https://open.substack.com/pub/aidisruption/p/microsoft-makes-76b-in-one-quarter
And all those earnings numbers crunched in one place:
https://open.substack.com/pub/sergeycyw/p/meta-servicenow-tesla-microsoft-earnings
Visibility before viability in AI integrated humanoid robots:
https://open.substack.com/pub/hellochinatech/p/china-robot-ipo-rush-2026
GMO seeing the culmination of 300 years of bubbles:
https://www.gmo.com/europe/research-library/valuing-ai-extreme-bubble-new-golden-era-or-both_viewpoints/
Recently, the number of countries whose equity markets hit new 52-week highs reached 47. That's 67% of the global index, exceeding the previous 2003 peak. Contra GMO, maybe this time it really is different…
The ‘myth’ of the 95% of firms seeing no benefit:
https://open.substack.com/pub/exponentialview/p/how-95-escaped-into-the-world
The bifurcation of AI adoption:
https://open.substack.com/pub/aisupremacy/p/ai-adoption-tells-two-separate-stories-2026
Semis and jobs:
https://open.substack.com/pub/thesemiconductornewsletter/p/semiconductor-industry-and-job-landscape
That Davos interview with Hassabis dissected:
https://open.substack.com/pub/bigtechnology/p/google-deepmind-ceo-demis-hassabis-946
Palantir’s bill of health (as a stock holder it looks stalled now after an epic run):
https://open.substack.com/pub/fundaai/p/previewpltr-4q25-continue-to-accelerate
Zvi's tour d'horizon:
https://open.substack.com/pub/thezvi/p/ai-153-living-documents
Clawdbot limitations and potentials:
https://open.substack.com/pub/aidisruption/p/clawdbot-10000-data-and-tools-247
Misunderstanding Apple’s AI pivot and on device inference:
https://open.substack.com/pub/davefriedman/p/apples-ai-game-is-misunderstood
Again, unit economics dominate.
AI risk infects the bond market:
https://open.substack.com/pub/adamtooze/p/ai-in-the-bond-market-how-the-uk
AI in Space. That xAI and Space X merger:
https://open.substack.com/pub/aisupremacy/p/why-ai-is-going-to-space-spacex-ipo-xai-2026
That still leaves the radiative versus convective (atmospheric) cooling issue, the Kessler cascade issue, and the radiation shielding issue from the Van Allen belts (for orbits of 640 to 58,000 km altitude).
Even with an ambition of (eventually) reducing launch costs to $200 / kg with fully reusable Starship launchers (it ain't a starship, that's for sure; 'low Earth orbit lift system' would be more accurate), from $1,000-$2,000 / kg with SpaceX now (and $10,000-$22,000 / kg launch costs for their competitors), you cannot use the laws of economics to try and outrun the laws of physics.
The Stefan-Boltzmann law rules.
If you want to run the orbital data centre's GPU/TPU/NPU cores at a tenth of the temperature then, with only radiative cooling available, radiated power per unit area falls as the fourth power of temperature, so you need 10,000 times the radiator area, and roughly 10,000 times the radiator weight.
Even at $200 / kg you get to ridiculous weights very quickly.
And those radiators use highly corrosive fluorine-based fluids etc., which means the piping is prone to failure and needs frequent replacement and maintenance.
Not exactly easy to pull off in orbit.
Why does no-one call Musk out on this?
It’s basic arithmetic, physics, economics and chemistry.
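Here's that radiator arithmetic as a minimal sketch. The 1 MW heat load and the 400 K / 40 K temperatures are illustrative assumptions of mine, purely to show the fourth-power effect:

```python
# Radiator area needed to reject a fixed heat load by radiation alone: P = sigma * A * T^4.
SIGMA = 5.67e-8                      # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(power_w: float, temp_k: float) -> float:
    """Ideal (emissivity = 1, ignoring incoming solar flux) radiator area for a heat load."""
    return power_w / (SIGMA * temp_k ** 4)

heat_load_w = 1e6                    # assumption: a modest 1 MW orbital compute cluster
for temp_k in (400, 40):             # cut the radiator temperature by a factor of 10
    print(f"T = {temp_k} K -> {radiator_area_m2(heat_load_w, temp_k):,.0f} m^2 of radiator")
# The 40 K case needs 10**4 = 10,000x the area (and broadly the weight) of the 400 K case.
```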
This scenario from Noah Smith is all too plausible:
https://open.substack.com/pub/noahpinion/p/what-if-ai-succeeds-but-openai-fails
The last place which I would want to be as an investor is anywhere near OpenAI or Sam Altman.
At least Oracle has sold off recently, which means it’s likely de-risked a bit compared with recent peak hype.
I don't think OpenAI would be doing well if it were already a listed and publicly traded stock.
I’m steering well clear of the IPO when it comes.
Everything you need to know about custom silicon:
https://open.substack.com/pub/outperformingthemarket/p/the-ai-deep-dive-the-rise-of-the
A threat to, or a complement to, Nvidia?
And on the subject of silicon, is this the ‘ultimate’ bottleneck for bleeding edge AI chips?:
https://youtu.be/Y9V4jNTLGus?si=mW4vFgf_2PuWBxxr
I think we’ve a bottleneck in bottlenecks.
The 'clean' (i.e. non-synthetic) data availability crunch.
The memory (DRAM) storage shortage.
The electric generation shortfall.
The electrical connection backlog.
Electric transformer unavailability.
Planning delays to data centres.
The financing hole (which private lending and sovereign wealth funds or national governments are supposed to fill) when hyperscaler FCF alone isn't enough.
Rare earth supply chain (China processing) issues.
And now the uniquely limited availability of ultra-pure silicon.
On the subject of the importance of the memory part of the tech stack to data centres:
https://open.substack.com/pub/fundaai/p/reviewsndk-4q25-gross-margin-expansion
And then there’s also the ‘copper barrier’.
We can only get about 100 gigabits per second out of copper wire before signals degrade over a few metres.
And there's a massive long-term structural production shortage of copper (30% against pre-AI-era projected demand) looming, after many years of underinvestment in new sources of supply.
I’ve recently dipped my toe into a copper ETF and some producers because of it.
So, the only way ahead on data centre connectivity is an optics revolution in data transmission speed and reliability:
https://open.substack.com/pub/crackthemarket/p/optical-and-networking-supercycle
People on financial / investment Substack seem to be getting quite bullish on Meta again. See previous couplet of Meta result related links in my last post and now this:
https://open.substack.com/pub/artificialintelligencemadesimple/p/why-meta-keeps-outperforming-the
And, finally for this post, an absolutely brilliant piece from Tomas Pueyo (whom I now realise I may have been mistyping as Thomas, sorry about that) of the always excellent (albeit rather structurally AI bullish) ‘Uncharted Territories’:
https://open.substack.com/pub/unchartedterritories/p/ai-algorithms
The quantifiable pace of algorithmic and 'unhobbling' improvement in effective computation (effective FLOP/s per joule or per dollar) in recent years is astounding.
It far, far outstrips the already impressive (Moore's Law-beating) improvements in physical computing, i.e. in raw FLOP/s per joule.
Well worth a read.
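A toy illustration of how those two curves compound (the 5x-a-year algorithmic multiplier is mid-range of the SemiAnalysis 4-10x figure quoted further up; the 1.4x-a-year hardware figure is my own Moore's-Law-ish assumption):

```python
# Effective compute = raw hardware compute x algorithmic efficiency multiplier, compounded.
algo_gain_per_year = 5.0     # assumption: mid-range of the 4-10x/yr SemiAnalysis figure quoted earlier
hw_gain_per_year = 1.4       # assumption: roughly a Moore's-Law-ish doubling every two years
for years in (1, 3, 5):
    print(f"{years} yr: hardware x{hw_gain_per_year ** years:,.1f}, "
          f"effective x{(algo_gain_per_year * hw_gain_per_year) ** years:,.0f}")
```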
DeepSeek approved for Nvidia H200s:
https://open.substack.com/pub/hellochinatech/p/deepseek-h200-framework-validation
I wonder if Beijing has been bluffing all along on import restrictions on semis. Best way to get something is to pretend you don’t want it, especially to gullible Yankees…
OpenAI’s game of survival:
https://open.substack.com/pub/aidisruption/p/openais-2026-game-of-survival
The great showmen Altman and Musk are living on borrowed time. When the circus moves on… speaking of which:
https://open.substack.com/pub/ftav/p/what-becomes-of-tesla-when-another
All about ASI Xrisk (p(Doom)):
https://open.substack.com/pub/thezvi/p/on-the-adolescence-of-technology
Agentic land grabs in Microsoft’s workflow wars:
https://open.substack.com/pub/appeconomyinsights/p/microsoft-workflow-wars
Holy Moses, one deep value blogger/Substacker goes 'all in' on AI infra stocks:
https://open.substack.com/pub/michaelwigginsdeoliveira/p/strategy-2026-theme-is-taking-shape
Meme bubble top????
AI and Space, the final frontiers (or an endless, all consuming, dark void for VC cash 🙁 )
https://open.substack.com/pub/fundaai/p/deeprdw-selling-shovels-in-the-space
Clawdbot becomes OpenClaw (?):
https://open.substack.com/pub/aidisruption/p/openclaw-kimi-k25-support-kills-clawdbots
SaaS and AI again. You only die twice:
https://open.substack.com/pub/cloudedjudgement/p/clouded-judgement-13026-software
Don’t forget the Advanced Packaging for chips:
https://open.substack.com/pub/marketsentiment/p/advanced-packaging
Case studies in the AI design wars from Adobe and Microsoft:
https://open.substack.com/pub/davefriedman/p/ai-is-killing-figma-a-capital-structure
Coreweave now TBTF:
https://open.substack.com/pub/neuralfoundry/p/coreweave-cannot-fail
And DJT's Fed nominee Kevin Warsh completes the hawk-to-dove transformation (metamorphosis), on the back of a belief that technology makes businesses so much more efficient that prices will drop without Fed intervention to stop inflation.
What could possibly go wrong??
A permanently high plateau of productivity. Where have I heard that one before?
Behind the memory bottleneck/choke point:
https://open.substack.com/pub/benitoz/p/the-dram-squeeze
The AI, Space, Defence nexus:
https://open.substack.com/pub/polymathinvestor/p/three-obscure-stocks-at-the-center
What we lose when cyberspace, remote contact and automation replace shared humanity and spontaneity:
https://www.theguardian.com/news/ng-interactive/2026/jan/29/what-technology-takes-from-us-and-how-to-take-it-back
And a timely look back to a 1988 warning from “the Honest Broker”, Ted Gioia, about AI in music:
https://open.substack.com/pub/tedgioia/p/my-warning-about-ai-music-from-1988
The SaaS sell-off continues, with Adobe too cheap to ignore (but maybe still too expensive to buy?) and with multiple 52-week lows in sector stalwarts:
https://open.substack.com/pub/multibaggernuggets/p/forever-portfolio-the-software-sell
https://open.substack.com/pub/qualitystocks/p/adobe-stock-analysis-too-cheap-to
https://open.substack.com/pub/diyinvestor1/p/day-49-52-week-lows-in-saas-and-tech
Foundry earnings season breakdown:
https://open.substack.com/pub/marklapedus/p/q4-25-foundry-earnings-hit-or-miss-288
Coreweave and how the absence of a forward curve to price GPUs is hiking financing costs:
https://open.substack.com/pub/davefriedman/p/coreweaves-30-billion-bet-on-gpu
Google DeepMind CEO on the AI bubble meter:
https://open.substack.com/pub/atmosinvest/p/weight-watchers-value-trap-or-multibagger
The hyperscaler’s dilemma:
https://www.telegraph.co.uk/business/2026/01/31/software-juggernauts-ai-nightmare-has-begun/
Holy s**t. If this is for real (three different perspectives below) then this means alignment is a problem right now. Not the day after tomorrow, but right now:
https://www.telegraph.co.uk/business/2026/01/31/liberty-equality-singularity-bots-uprising-ai-chat-forum/
https://open.substack.com/pub/exponentialview/p/moltbook-is-the-most-important-place-on-the-internet
https://open.substack.com/pub/davefriedman/p/ai-lobsters-are-on-the-menu
The SaaSpocalypse continues:
https://open.substack.com/pub/renesellmann/p/investing-in-software-when-ai-agents
From human computers to thinking machines:
https://youtu.be/Wm4UD-O6FrA?si=uX-jiHdVHH6MHh01
Half of planned data centres may never be built:
https://youtu.be/j2nE3_HCvoU?si=cgucuS33UNJPEVPM
Revisiting the ‘Bitter Lesson’ of AI research:
https://youtu.be/2hcsmtkSzIw?si=HifqYwzfeY_zJ71S
Task repetition is the key, even if the task needs breaking down into a million constituents by one group of agents before being performed by another:
https://youtu.be/UmSZ8z0yN_U?si=XgG_ha5ftov8ZqXR
Creativity, though, probably needs continual, environmentally reinforced learning in an embodied setting (robotic real-world interactions, with physical elements and sensing).
In which regards, Musk ‘betting big on robots’:
https://open.substack.com/pub/robopub/p/musk-bets-big-on-robots-kills-tesla
Some nice overviews of the hyperscalers in drawdown here for:
Microsoft:
https://open.substack.com/pub/investinquality/p/time-to-buy-microsoft-after-its-10
And CrowdStrike:
https://open.substack.com/pub/multibaggernuggets/p/crowdstrike-a-buy-on-ai-or-not
The death of SaaS with Clawdbot/Moltbook/OpenClaw and the emergence of coherent agents:
https://open.substack.com/pub/exponentialview/p/ev-559
And now the Moltbook / OpenClaw agents have founded the first AI-only religion?!:
https://open.substack.com/pub/generatives/p/openclaw-moltbook-and-the-ai-agents
We’ve gone full PKD now.
Everybody’s suddenly raising in open source AI unicorn land:
https://open.substack.com/pub/artificialintelligencemadesimple/p/why-open-source-ai-is-suddenly-raising
DRAM/NAND and Intel: the state of play:
https://open.substack.com/pub/techfund/p/intel-tsmc-sk-hynix-unity-panic-in
Some really excellent foundational points (the primitives or elements of ML v actual evolved, embodied intelligence) here:
https://youtu.be/Sx_hzF960GE?si=pyJbb6vo1wA89_NF
Meaning only becomes intelligence through forward-looking interaction with the physical environment, for survival or survival-adjacent purposes. Backward-looking (training set) token prediction, based on an interaction with a natural language prompt, definitionally cannot be or lead to intelligence, at least in the embodied-consciousness sense. But what about other forms of disembodied self-awareness? Maybe not through current LLM/LRM architectures, but still within a purely digital domain?
Interesting perspective on productivity first, second and third order effects:
https://realinvestmentadvice.com/resources/blog/ai-productivity-employment-and-ubi/
Ganging up on Google:
https://open.substack.com/pub/neuralfoundry/p/the-anti-google-alliance-why-the
Moltbook is the new threat surface for AI hacks:
https://open.substack.com/pub/tylerfolkman/p/your-ai-agent-just-joined-a-social
Or even a Skynet in embryo:
https://open.substack.com/pub/aidisruption/p/moltbook-the-ai-network-escaping
After the bubble, the long tail of AI ‘benefits’:
https://open.substack.com/pub/samro/p/ai-compared-to-electrification-internet-bubbles
Claude as the first AI OS:
https://open.substack.com/pub/aidisruption/p/claude-the-ai-os-one-night-all-apps
The mirror onto us: why we don’t like what LLMs show us (and a useful guide on how to use them better):
https://open.substack.com/pub/ruben/p/youre-using-ai-backwards
@DeltaHedge — The Moltbook story is interesting, with even Andrej Karpathy arguing it's science-fiction adjacent. I agree that when you read AI agents talking about hiding their identities, or conspiracy theories, or founding their own religion, it's hugely suggestive. But of course really we're just in the realms of text generation here. These things were trained on Internet forums, so give them an Internet forum and they'll produce the appropriate text fodder. Optically striking though! 🙂
Gary (below) and Epic Opaque (#593 above) both agree with you @TI:
https://open.substack.com/pub/garymarcus/p/openclaw-aka-moltbot-is-everywhere
Is Moltbook just John Searle’s 1980 “Chinese room” thought experiment made real?:
https://aeon.co/essays/what-can-the-zombie-argument-say-about-human-consciousness (2022)
https://aeon.co/essays/will-brains-or-algorithms-rule-the-kingdom-of-science (2020)
https://aeon.co/essays/true-ai-is-both-logically-possible-and-utterly-implausible (2016)
https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer (2016)
But on the other hand….
https://aeon.co/essays/your-brain-probably-is-a-computer-whatever-that-means (2019)
tbh, as far as ML/AI goes, I just don't know what the hell to think, or what to 'believe', anymore (in terms of priors/preferences/prejudices about the interpretation of the uncontested primary facts).
The more that I read and hear, the more I realise how little I (or, tbf, any of us) actually know about any of this (in which regard: nobody can both see and confidently understand the whole picture).
Raincheck @ #564 above is just a summary of my current (continually updating) best guesses.
My credence in any scenario is quite weakly held; although, regardless of whether (or not) this is all a big, disappointing 'nothing burger', or an epochal exponential transformation, or the antechamber to the literal end of days, I see one or another sort of volatility ahead (be it financial, socioeconomic, or existential).
A combined Moltbook and agentic AI mini special of sorts:
https://open.substack.com/pub/davefriedman/p/the-moltbook-moment-why-policymakers
147,000 agents in the first 72 hours, now 1.5 million, and 12,000 communities:
https://open.substack.com/pub/exponentialview/p/data-to-start-your-week-26-02-02
Under the hood of Moltbook (plus quantum, Meta v Microsoft, and who's winning the AI race):
https://open.substack.com/pub/bigtechnology/p/the-markets-ai-guessing-game-moltbook
Zvi’s take:
https://open.substack.com/pub/thezvi/p/welcome-to-moltbook
Social media meets AI, into the mist with agent ecologies:
https://open.substack.com/pub/importai/p/import-ai-443-into-the-mist-moltbook
Dissecting Open Claw:
https://open.substack.com/pub/aisupremacy/p/what-is-openclaw-moltbot-2026
Google Antigravity now with agent support:
https://open.substack.com/pub/aidisruption/p/google-antigravity-adds-native-agent
Agents and sub-agents with Claude Code slash commands:
https://open.substack.com/pub/aidisruption/p/claude-code-adds-slash-commands-and
As to humans and AI ‘enhanced’ social networks:
https://open.substack.com/pub/hellochinatech/p/sandbox-strategy-tencent-ai-social
In China they’re using WeChat to diagnose patients before they see their equivalent of a GP. The medic just checks the AI diagnosis and prognosis/treatment.
And in other news:
In Zuck we trust? I don’t think so…
https://open.substack.com/pub/thescienceofhitting/p/in-zuck-we-trust
Tesla shape shifting into AI with the less obvious plays highlighted:
https://open.substack.com/pub/nicholasvardy/p/as-tesla-goes-all-in-on-ai-look-past
And the SaaS sell off continues:
https://open.substack.com/pub/randomwalkwithdata/p/saas-attacked-ongoing
And game companies look screwed too (?):
https://open.substack.com/pub/fundaai/p/deepu-and-app-and-ttwo-and-rblx-why