
Weekend reading: First they came for the call centres


What caught my eye this week.

Bad news! Not only are the machines now coming for our cushy brain-based desk jobs, but our best response will be to hug it out.

At least that’s one takeaway from a report in the Financial Times this week on what kinds of jobs have done well as workplaces have become ever more touchy-feely – and thus which will best survive any Artificial Intelligence takeover.

The FT article (no paywall) cites research showing that over the past 20 years:

…machines and global trade replaced rote tasks that could be coded and scripted, like punching holes in sheets of metal, routing telephone calls or transcribing doctor’s notes.

Work that was left catered to a narrow group of people with expertise and advanced training, such as doctors, software engineers or college professors, and armies of people who could do hands-on service work with little training, like manicurists, coffee baristas or bartenders.

This trend will continue as AI begins to climb the food chain. But the final outcome – as explored by the FT – remains an open question.

Will AI make mediocre workers more competent?

Or will it simply make more competent workers jobless?

Enter The Matrix

I’ve been including AI links in Weekend Reading for a couple of years now. Rarely to any comment from readers!

Yet I continue to feature them because – like the environmental issues – I think AI is sure to be pivotal in how our future prosperity plays out. For good or ill, and potentially overwhelming our personal financial plans.

The rapid advance of AI since 2016 had been a little side-interest for me, which I discussed elsewhere on the Web and with nerdy friends in real life.

I’d been an optimist, albeit I used to tease my chums that it’d soon do them out of a coding job (whilst simultaneously being far too optimistic about the imminent arrival of self-driving cars).

But the arrival of ChatGPT was a step-change. AI risks now looked existential. Both at the highest level – the Terminator scenario – and at the more prosaic end, where it might just do us all out of gainful employment.

True, as the AI researchers have basically told us (see The Atlantic link below), there’s not much we can do about it anyway.

The Large Language Models driving today’s advances in AI may cap out soon due to energy constraints, or they may be the seeds of a super-intelligence. But nobody can stop progress.

What we must all appreciate though is that something is happening.

It’s not hype. Or at least the spending certainly isn’t.

Ex Machina

Anyone who was around in the 1990s will remember how business suddenly got religion at the end of that decade about the Internet.

This is now happening with AI:

Source: TKer

And it’s not only talk – there’s massive spending behind it:

Source: TKer

I’ve been playing with a theory that one reason the so-called ‘hyper-scalers’ – basically the FAANGs that don’t make cars, so Amazon, Google, Facebook et al – and other US tech giants are so profitable despite their size, continued growth, and 2022-2023 layoffs is that they have been first to deploy AI in force.

If that’s true it could be an ominous sign for workers – but positive for productivity and profit margins.

Recent results from Facebook (aka Meta) put a hole in this thesis, however. The spending and investment are there. But management couldn’t point to much in the way of a return. Except perhaps the renewed lethality of its ad-targeting algorithms, despite Apple and Google having crimped the use of cookies.

Blade stunner

For now the one company we can be sure is making unbelievable profits from AI is the chipmaker Nvidia:

Source: Axios

Which further raises the question of whether, far from being overvalued, the US tech giants are still must-owns as AI rolls out across the corporate world.

If so, the silver lining to their dominance in the indices is that most passive investors already have a chunky exposure to them. Global tracker ETFs are now about two-thirds in US stocks. And the US indices are heavily tech-orientated.

But should active investors try to up that allocation still further?

In thinking about this, it’s hard not to return to where I started: the Dotcom boom. Which of course ended in a bust.

John Rekenthaler of Morningstar had a similar thought. And so he went back to see what happened to a Dotcom enthusiast who went all-in on that tech boom in 1999.

Not surprisingly given the tech market meltdown that began scarcely 12 months later, the long-term results are not pretty. Bad, in fact, if you didn’t happen to buy and hold Amazon, as it was one of the few Dotcoms that ultimately delivered the goods.

Without Amazon you lagged the market, though you did beat inflation.

And yet the Internet has ended up all around us. It really did change our world.

Thematic investing is hard!

I wouldn’t want to be without exposure to tech stocks, given how everything is up in the air. Better I own the robots than someone else does, if they’re really coming for my job.

But beware being too human in your over-enthusiasm when it comes to your portfolio.

The game has barely begun and we don’t yet know who will win or lose. The Dotcom crash taught us that, at least.

Have a great weekend!

From Monevator

Does gold improve portfolio returns? – Monevator [Members]

How a mortgage hedges against inflation – Monevator

From the archive-ator: How gold is taxed – Monevator

News

Note: Some links are Google search results – in PC/desktop view click through to read the article. Try privacy/incognito mode to avoid cookies. Consider subscribing to sites you visit a lot.

UK inflation rate falls to lowest level in almost three years – BBC

Energy price cap will drop by 7% from July [to £1,568] – Ofgem

House prices are modestly rising, driven by 17% annual spike in new build values – T.I.M.

Hargreaves Lansdown rejects £4.7bn takeover approach – This Is Money

Judge: Craig Wright forged documents on ‘grand scale’ to support Bitcoin lie – Ars Technica

FCA boss threatens private equity with regulator clampdown – CityAM

Sunak says it’s 4th July, in the rain, against a subversive soundtrack [Iconic] – YouTube

Sir Jim Ratcliffe scolds Tories over handling of economy and immigration after Brexit – Sky

No, it’s not all the Tories’ fault… but Sunak and Hunt were too little, too late – Bloomberg

Products and services

Pay attention to catches as well as carrots when switching bank accounts – Guardian

Which energy firm offers the cheapest way to get a heat pump? – T.I.M.

How to get the most from second-hand charity shops – Which?

Get £200 cashback with an Interactive Investor SIPP. New customers only. Minimum £15,000 account size. Terms apply – Interactive Investor

Nine out of ten savings accounts now beat inflation – This Is Money

Problems when transferring a cash ISA – Be Clever With Your Cash

Nationwide launches a trio of member deals worth up to £300 – Which?

Transfer your ISA to InvestEngine by 31 May and you could get up to £2,500 as a cashback bonus (T&Cs apply. Capital at risk) – InvestEngine

Seven sneaky clauses in estate agent contracts that can cost you dear – This Is Money

Halifax Reward multiple account hack: worth up to £360 a year – Be Clever With Your Cash

Hidden homes in England and Wales for sale, in pictures – Guardian

Comment and opinion

No, the stock market is not rigged against the little guy – A.W.O.C.S.

The life hedge… – We’re Gonna Get Those Bastards

…is easier said than implemented [US, nerdy] – Random Roger

Checking out a fake Ray Dalio Instagram investing scam – Sherwood

An open letter to Vanguard’s new CEO – Echo Beach

If you look past the headlines, London is charging ahead – CityAM

Most of us have too much in bonds [Search result] – FT

Why we still believe in gold – Unherd

Are ‘fallen angel’ high-yield bonds the last free lunch in investing? – Morningstar

For love or money – Humble Dollar

Naughty corner: Active antics

Fund manager warns putting £20k in the US now will [possibly!] lose you almost £8k – Trustnet

A deep dive into US inflation, interest rates, and the US economy – Calafia Beach Pundit

A tool for testing investor confidence – Behavioural Investment

When to use covered call options – Fortunes & Frictions

Valuing Close Brothers after the dividend suspension – UK Dividend Stocks

Meme stock mania has entered its postmodern phase [I’m editorialising!] – Sherwood

Kindle book bargains

Bust?: Saving the Economy, Democracy, and Our Sanity by Robert Peston – £0.99 on Kindle

Number Go Up by Zeke Faux – £0.99 on Kindle

How to Own the World by Andrew Craig – £0.99 on Kindle

The Great Post Office Scandal by Nick Wallis – £0.99 on Kindle

Environmental factors

Taking the temperature of your green portfolio [Search result] – FT

The Himalayan village forced to relocate – BBC

‘Never-ending’ UK rain made 10 times more likely by climate crisis, study says – Guardian

So long triploids, hello creamy oysters – Hakai

Robot overlord roundup

We’ll need a universal basic income: AI ‘godfather’ – BBC

Google’s AI search results are already getting ads – The Verge

AI engineer pay hits $300,000 in the US – Sherwood

With the ScarJo rift, OpenAI just gave the entire game away – The Atlantic [h/t Abnormal Returns]

Perspective mini-special

How much is a memory worth? – Mike Troxell

We are all surrounded by immense wealth – Raptitude

How to blow up your portfolio in six minutes – A Teachable Moment

My death odyssey – Humble Dollar

Off our beat

The ultimate life coach – Mr Money Mustache

How to cultivate taste in the age of algorithms – Behavioural Scientist

Trump scams the people who trust him – Slow Boring

Buying London is grotesque TV, but it reflects the capital’s property market – Guardian

The algorithmic radicalisation of Taylor Swift – The Atlantic via MSN

And finally…

“Three simple rules – pay less, diversify more and be contrarian – will serve almost everyone well.”
– John Kay, The Long and the Short of It

Like these links? Subscribe to get them every Friday. Note this article includes affiliate links, such as from Amazon and Interactive Investor.

  • 1 Marco May 25, 2024, 11:52 am

    Yay, I actually guessed that the “bearish on US equity” fund manager was Hussman before reading the article. The guy has made a living out of being an uber perma bear for the last 15 years. He’s probably due a right call?

  • 2 Paul_a38 May 25, 2024, 12:30 pm

    Thanks for the article, thought you might have the holiday weekend off.
OK, ESG has exhausted its sellside utility it seems, now replaced by AI. Can’t get excited – it’s still only software, and I reckon it will choke on its own errors as they cascade. The big problem will be legal: how to have a known, assured trail untainted by AI. So AI will be a source of data pollution, which may cause a few steps backward until The Purge.
    As for the Internet, a bunch of wires and dumb computers until along came the search engine doing what Morse did for the telegraph.

  • 3 dearieme May 25, 2024, 12:42 pm

    Decades ago I worked around the corner from a university department of Machine Intelligence, as AI was then called. The prima donnas who ran the department promised imminent society-changing revolution.

    I had a beer with one of their bright young men. He told me there were two deep problems with it all. (i) They didn’t know how to make computers emulate human decision-making. (ii) They didn’t know if it would be sensible even to try to emulate human decision-making.

    Much more computing power is available nowadays but are there yet answers to his two points?

  • 4 xxd09 May 25, 2024, 12:46 pm

In the distant past, during my long learning evolution as an investor, J. Hussman appeared on my investing information radar and I read his posts for some time. They were a rather wonderful antidote to the eternal optimists who far outnumbered him at that time.
I could never understand how such a depressing investing outlook could appeal to so many punters, but it did and does, as he is still in business.
He must appeal to the many “end of the world” guys and gals, of whom we do seem to have a preponderance at the moment – climate change, rising CO2 levels, rising sea levels (a change from the Bomb when I was a boy) etc etc.
Re AI takeovers – I see 2 models. AI for the mundane routine procedures of living; but as social human beings we like to smell and interact with others of our species, and that sort of particular service model cannot be duplicated, unless you prefer robots.
I notice small retailers in my area still apparently making a living against the Tescos etc using personal service and polite, helpful staff – ie humanoid social interaction as a selling point. Seems to work – so far!
    xxd09

  • 5 ermine May 25, 2024, 1:19 pm

    Re AI, well, it sure ain’t improving Web search any. At the moment AI seems to be busy enshittifying much. Save us from AI ‘art’ in the hands of tyros, this is one bunch of mediocre workers not being made more competent.

    While the dotcom times spring to mind, perhaps that’s recency bias. We’ve seen this movie before, with the railways and possibly electricity. It’s hardly as if electrickery, trains and t’internet have died out in common usage, but the investors were the fall guys for sussing out which ideas worked and which didn’t.

    Many are called, few are chosen 😉

    Ah, AI index funds you say? I used to hold TMT back in them dotcom days. Not TMT investments, but these guys, the iShares TMT ETF, which was later eradicated on the QT during the bust. Who did well out of the dot-com boom? Our old friend Warren Buffett, who studiously ignored it all and bought non dotcommy stuff while everyone was chasing lastminute.com

  • 6 xalion May 25, 2024, 1:41 pm

    GPUs are expensive to buy and the power cost of processing a query using one of the language models is a lot higher than a simple Google search. Unless the business model allows more $ to be received as a result, the incremental impact to profit margins & returns on capital is negative. There are competitive pressures forcing tech companies down this road, not sure it makes them better investments.

There’s a Buffett anecdote to the effect that everyone at a sports match starts seated, the front row stands up to get a better view, causing a cascade effect for the rows behind, the end result being that no one can see any better, but all are much less comfortable.

  • 7 xalion May 25, 2024, 1:48 pm

It’s also rare to find relative value in an area which has attracted lots of excitement from investors who have bid up the prices, without having much clue as to who the winners will be and whether the prize is boobytrapped.

  • 8 Mei May 25, 2024, 5:21 pm

Interesting article. I do use Gemini (Google’s ChatGPT) but only for limited tasks such as writing a letter. Anything that contains technical detail should be checked by a human. For example, a math problem generated to educate a kid has errors in it. I doubt it could be used for anything serious in the near future.

I guess whether to buy Nvidia or not is another question. It’s speculation.

  • 9 Ben May 25, 2024, 6:05 pm

There’s definitely potential in AI, as the AlphaFold protein structure project by DeepMind shows. But there is also a vast element of hype clouding the picture – like dot-com, Blockchain, ETFs… A big crash will come before we know where the real potential is. LLMs are impressive but unreliable and stupid, perhaps permanently so.

  • 10 Ben May 25, 2024, 7:12 pm

    Scuse typo – NFTs, not ETFs

  • 11 Delta Hedge May 25, 2024, 10:31 pm

    Three different comments if I may:

    1. On @TI’s question: “But should active investors try to up that allocation still further?”: I’d suggest thinking in bets. It could be different this time, but (I’d guess) on 80% of occasions it isn’t, and mean reversion occurs. But you can bet on both horses by ‘barbelling’ the portfolio. Or at least you could if we were in the US and had access to their ETF universe (boo hiss to PRIIPs reg’s and MIFID which stop us getting access to these products here).

    Between them the WisdomTree Global Megatrends Equity index US ETF and the Invesco S&P 500 Momentum US ETF, SPYO, would enable one to concentrate the half of the bet going into the ‘AI wins’ scenario; whilst the other half goes into an ‘AI loses’ allocation to ex-US developed market and Emerging Market Small Cap Value (i.e. the opposite).

    The SPYO ETF is especially interesting here as it solely comprises those S&P 500 constituents with the highest risk-adjusted price return momentum scores adjusted by their cap weights.

    Constituents are weighted by the product of market cap and momentum score, subject to constraints; namely that the maximum weight of each security is the lower of 9% and 3 times its market cap share in the S&P 500.
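To make that weighting rule concrete, here’s a toy sketch in Python – invented constituents, caps and scores, and a simplified version of the capping and redistribution (the real S&P methodology document is the authority):

```python
# Toy sketch of the momentum index weighting rule described above.
# All market caps and momentum scores are invented for illustration.
caps = [400, 350, 300, 250, 220, 200, 180, 160, 150, 140, 130, 30]  # $bn
momentum = [2.0, 0.8, 1.6, 1.2, 0.5, 2.2, 1.0, 0.9, 1.4, 0.7, 1.1, 3.0]

index_total = sum(caps)
raw = [c * m for c, m in zip(caps, momentum)]         # cap x momentum
weights = [r / sum(raw) for r in raw]

# Each weight is capped at the lower of 9% and 3x the stock's plain
# cap-weight share; excess weight is redistributed to uncapped names
# (the real index iterates this until nothing breaches its cap).
limits = [min(0.09, 3 * c / index_total) for c in caps]
for _ in range(20):  # a few passes converge for a toy example
    excess = sum(max(w - l, 0) for w, l in zip(weights, limits))
    if excess < 1e-9:
        break
    weights = [min(w, l) for w, l in zip(weights, limits)]
    free = sum(w for w, l in zip(weights, limits) if w < l)
    weights = [w + (excess * w / free if w < l else 0)
               for w, l in zip(weights, limits)]

print(" ".join(f"{w:.1%}" for w in weights))
```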

    When the index is concentrating into the biggest names it becomes very hard indeed to beat it with any form of active stock selection.

When £1 is invested in the S&P 500 index, 35p now flows into the top 10 stocks. The remaining 65p is divided between the other 490 stocks. So active investors risk trailing the index unless 35% of their portfolio is invested in those 10 stocks.

Bill Miller from Legg Mason and Peter Lynch at Fidelity each achieved 15 years beating the S&P 500. Then their strategies ran into headwinds and stopped working. But markets were less efficient and less concentrated then. According to one calculation, at the current concentration levels in the top 10 stocks – basically the hyper-scalers which @TI refers to above – the probability of an active manager (before any fees and transaction costs) beating the S&P 500 for 15 consecutive years, after doing so in the first year, is now just 1 in 2.3 million.
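As a quick sanity check on that figure – crudely assuming each year is an independent draw with the same odds:

```python
# Back-of-envelope check on the '1 in 2.3 million' streak figure,
# assuming each year is an independent coin flip with identical odds.
p_streak = 1 / 2.3e6
years = 14           # a 15-year streak, conditional on winning year one
p_year = p_streak ** (1 / years)
print(f"Implied chance of beating the index in any one year: {p_year:.0%}")
# ~35%: on this model a manager beats the index only ~1 year in 3 at
# today's concentration, so 15-year streaks become vanishingly rare.
```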

2. @ermine #5: the regular Google search product is now irredeemably enshittified for sure, but Perplexity AI does a pretty decent job as a universal summariser, and better I’d say than ChatGPT 3.5 (comparing free with free versions), even if it’s no form of any type of intelligence; just a better ‘dumb rock’ stochastic parrot. Here’s the answer it gave me to the query as to what effect a declining and aging population would have on both the equity risk premium and the risk free rate: “Based upon the provided search results, a declining and aging population is expected to increase the equity risk premium but decrease risk-free rates. The key points from the sources are: As the population ages, there will be a larger proportion of older, risk-averse investors shifting their portfolios towards safer assets like bonds, increasing demand for risk-free government bonds. This increased demand for bonds will drive down the risk-free rate of return. However, the increased risk aversion and lower risk tolerance of an aging population will reduce overall demand for risky assets like equities. Lower demand for equities will decrease equity prices and increase the expected equity risk premium. Specifically, some sources predict the equity risk premium will increase by around 0.28 percentage points due to demographic shifts, while the risk-free rate will decrease by around 1.5 percentage points. Sources also link population aging to a decline in safe real interest rates (risk-free rate) but a rise in equity risk premia. So in summary, the consensus from the provided literature is that an aging population will put upward pressure on the equity risk premium by reducing demand for risky assets, while simultaneously depressing the risk-free rate by increasing demand for safer government bonds.” I thought that was a pretty good first attempt at an answer, for a machine.

3. Tony Isola’s ‘A Teachable Moment’ piece in the Weekend links perpetuates a common misconception with this erroneous statement: “This capacity is enough to obliterate the planet several times over”. We are not six minutes away from the extinction of the human species, and we never have been. Dr. Brian Martin is a peace and disarmament activist in Australia, but also a social scientist committed to accuracy. Among others, he has pointed out that, whilst the effects would be quite horrifically devastating to the combatant countries, they didn’t (and still don’t) threaten human extinction. Here he is writing at the height of the Cold War in December 1982, when there were several times the number of weapons (both stockpiled and readied on launch-on-warning) as compared to now:

    https://www.bmartin.cc/pubs/82cab/index.html

    And this is a credible worst possible case scenario (from an alternative history perspective) written in 2003, and set in August 1988, at just about the worst possible time for an exchange to take place. It’s bad, but no extinction risk.

    https://www.johnstonsarchive.net/nuclear/nuclearwar1.html

  • 12 ermine May 25, 2024, 11:48 pm

@Delta Hedge #11 I will preface the following with the fact I am an old git and perhaps resistant to change, but I replicated your query on Perplexity re ageing and got the same result, which is nice. Do I feel I have learned something? Not really – it’s basically the outcome of what was coded in the old rule of thumb: take your age from 100, invest that percentage in equities, and the rest in bonds. Extrapolate that with supply and demand, job done.

I asked it to tell me about stoats, and I would have been far better off with the Wikipedia entry; Perplexity also favoured the negative press from New Zealand. Seriously New Zealanders, the most invasive species in NZ ain’t got four legs. And compared to the rest of the world colonised by mustelids, NZ is a pimple.

Perplexity’s got the same problem as AI art. It looks impressive – take this William Morris pastiche – but it reeks, in a curiously undefinable way that I have gotten to hate over the last three months. And I was an engineer – I have virtually zero artistic talent, but I can see what’s wrong. If an article has a banner pic that’s AI I don’t bother to read it.

I’m sure it will improve, and we will learn to use it properly – Edison’s cylinder phonograph and my hifi are a long way apart too. The protein folding stuff is amazing, a genuine advance that we may be grateful for with new drugs. But above all else, in the information space current AI is unsatisfying and a major pollutant.

The essential problem seems to be that it’s artificial, but it’s not intelligent. And it seems to make a particular type of human’s brain fall out in admiration of what it can do that we can’t, without acknowledging the converse – it’s not superhuman, and it’s causing us to devalue what is human.

  • 13 Ducknald Don May 26, 2024, 1:31 pm

    Nice to see the likes of Jim Ratcliffe still can’t bring themselves to say they were wrong.

    On the subject of AI it will be interesting to see if the results can improve without the energy costs going through the roof. I’m impressed with what I’ve seen so far but sceptical of the overall benefits, in particular because it’s big tech that seems most likely to reap the rewards.

  • 14 Boltt May 26, 2024, 1:46 pm

    The 2 most impressive AI things I’ve read are:

1 – identifying sex (not gender) from the iris with 99.88% accuracy

2 – identifying different fingerprints as being from the same person with 77% accuracy

Although I only just found out it wasn’t 100% – clever either way.

  • 15 Delta Hedge May 26, 2024, 3:29 pm

    @ermine #12: Gary Marcus’ Substack is a good place to get some constructive informed skepticism about AGI/ASI generally, and about LLMs in particular.

    The big questions for me here are:

    a). Is ASI merely difficult but, in principle, within reach (whether over long or, less plausibly, short time scales)?

    Or:

    b). Is ASI just a dream, akin to wishing for magic, where physical impossibility meets the human need to imagine something lying beyond the possible, like each of:

    – Backwards in time travel (e.g. Tipler cylinders):

    https://en.m.wikipedia.org/wiki/Tipler_cylinder

    – Faster than light travel (e.g. Alcubierre drive):

    https://youtu.be/SBBWJ_c8piM?si=BlR3ze8en6tdEp-G

    – FTL communication (e.g. using quantum entanglement):

    https://youtu.be/BLqk7uaENAY?feature=shared

    If ASI & AGI are phantasms of imagination and outside the realm of the possible, like each of the above examples are, then anything more than a zero allocation to their commercial realisation would be excessive.

    But if AGI and (ultimately perhaps even) ASI are merely very hard, but not actually impossible to achieve (notwithstanding many incremental S curves of break through & adoption might be required over a long time rather than rapidly reaching a much hyped technological singularity); then there is at least some reason behind the current surge in investment linked to trying to realise these goals.

    However, even then, the possibility of disappointments and delays would still be very substantial indeed. As with the TMT bubble of 1995-1999, even where a technology does ultimately more or less deliver as originally promised, the value of the companies which were built upon it can still crash miserably in the near to medium term if the pace of progress falls behind inflating and accelerating investor expectations.

  • 16 Alan S May 26, 2024, 6:39 pm

    @Delta Hedge (#11) – comment 3

    Interesting recent analysis of the effects of nuclear war in Nature (https://www.nature.com/articles/s43016-022-00573-0)

Not pretty reading – the 5 billion estimate (after a large exchange) is about 60% of the world’s population.

    Declassified estimates of casualty rates from the 1950s-1970s can be found at https://thebulletin.org/2023/01/cold-war-estimates-of-deaths-in-nuclear-conflict/ and seem to lie around the 50% mark in preemptive attacks or after 30 days. Of course, these are rates in the countries involved.

  • 17 Delta Hedge May 26, 2024, 11:05 pm

    Thanks @Alan S #16.

    The 5 bn figure relies on a full blown, very long lasting and very severe global nuclear winter. Without that it’s topping out at a loss of 360 mn people (which is of course horrific) or 4.5% of the current world population of 8 bn.

To clarify, I don’t necessarily disagree with the possibility of global nuclear winter, nor with the general thrust of the concern expressed in Annie Jacobsen’s book, which Tony Isola references (and which I’ve read, cheapskate that I am, perusing it whilst in Waterstones 😉 )

    Similar to last week’s Weekend reading comment #14 by @BarryGevenson that, “90% of life on this planet will be dead in 150 years”; my only objection here is on the factual inaccuracy of Tony Isola’s statement that: “This capacity is enough to obliterate the planet several times over.” That’s a strong, emphatic but incorrect claim.

In fairness, he’s an excellent financial blogger, and he’s relying here on Jacobsen’s otherwise superbly presented and quite credible book – but also one which seems to me to veer off right at the end: after outlining a well researched, well crafted and detailed scenario, it goes hyperbolic in its concluding pages and appears to suggest that much of the world would be uninhabitable to humans for 25,000 years.

There’s 40 years of controversy here (starting in 1982 with “Nuclear War: The Aftermath” in Ambio, published on behalf of the Royal Swedish Academy of Sciences); but not even the most ardent advocates for taking the severity of nuclear winter seriously, and not even the most severe models, predict human extinction – except as the most vanishingly remote possibility.

    As might be expected, in recent years the EA and LessWrong community has been active in both quantitatively probing the models and in reassessing the risks within numerical parameters, see as examples:

    https://forum.effectivealtruism.org/posts/pbMfYGjBqrhmmmDSo/nuclear-winter-reviewing-the-evidence-the-complexities-and

    https://forum.effectivealtruism.org/posts/6KNSCxsTAh7wCoHko/nuclear-war-tail-risk-has-been-exaggerated

    https://www.lesswrong.com/posts/sT6NxFxso6Z9xjS7o/nuclear-war-is-unlikely-to-cause-human-extinction

    The second and third of the above respectively note of the Robock study (which kicked off the modern, post Cold War, series of models on this subject):

    – “Luke Oman, one of the 3 authors of Robock 2007, having guessed a risk of human extinction of 0.001 % to 0.01 % for an injection of soot into the stratosphere of 150 Tg.” [150 teragrammes of soot being the worst case in an all out exchange in Robock’s already very pessimistic study].

    – “Carl Shulman asked one of the authors of this paper, Luke Oman, his probability that the 150Tg nuclear winter scenario discussed in the paper would result in human extinction, the answer he gave was “in the range of 1 in 10,000 to 1 in 100,000.””

    The actual reasoning of Luke Oman here – as one of the most prominent advocates of the possibility of severe nuclear winter – is then set out in his Q&A at:

    https://www.overcomingbias.com/p/nuclear-winter-and-human-extinction-qa-with-luke-omanhtml

    Human extinction risk is the existential dread which Tony Isola seems to fear in his piece, which is linked to in this week’s Weekend reading.

But, whilst the loss of (at most) between 360 mn and 5 bn lives amongst the 8 bn humans alive today would be an unimaginable tragedy and an unprecedented disaster, it would not be extinction.

    Extinction forecloses the lives of everyone who might otherwise live. That could be a lot of people.

    If you very conservatively assume a future average human (and human descended) population size of 1 bn people (i.e. only an eighth of the current world population size) with typical lifespans of a century, and then allow that the Earth will remain habitable for between 500 mn to 1.3 bn years but that natural mass extinction level events seem to occur every 100 mn to 500 mn years, then extinction now would foreclose the possibility of at least a quadrillion (i.e. a 1,000 tn) future human lives.

    This is why the loss of all 8 bn people alive now is likely to be at least a million times worse than the loss of 7 bn out of 8 bn people alive, and not just 14% worse.
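Spelling out that arithmetic, using only the assumptions stated above:

```python
# The arithmetic above, using only the stated assumptions.
population = 1e9      # conservative average future headcount
lifespan = 100        # years per life
for habitable_years in (500e6, 1.3e9):
    future_lives = population * habitable_years / lifespan
    ratio = future_lives / 7e9   # vs losing 7bn of the 8bn alive now
    print(f"{habitable_years:.1e} yrs -> {future_lives:.0e} lives, ~{ratio:,.0f}x")
# 5e15 to 1.3e16 future lives foreclosed (at least a quadrillion), i.e.
# roughly 700,000x to 1,900,000x the 7bn figure - 'a million times worse'.
```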

    Fortunately, what Tony Isola seems to fear in his piece, namely actual extinction of the human species, just isn’t going to arise out of this particular risk vector.

And there are plenty of reasonable grounds to doubt whether even a non-extinction level global nuclear winter scenario would eventuate.

In 1991 it was claimed that the Kuwaiti oil well fires might cause a global winter and lead to famine in Asia (Peter Aldhous, January 10, 1991, “Oil-well climate catastrophe”, Nature, 349 (6305): 96, “The fears expressed last week centred around the cloud of soot that would result if Kuwait’s oil wells were set alight by Iraqi forces … with effects similar to those of the “nuclear winter … Paul Crutzen, from the Max Planck Institute for Chemistry in Mainz, has produced some rough calculations which predict a cloud of soot covering half of the Northern Hemisphere within 100 days. Crutzen … estimates that temperatures beneath such a cloud could be reduced by 5–10 degrees C”). Those concerns turned out to be completely misplaced. There were only very localised and minimal cooling effects.

With the best of intentions and belief, the late great Carl Sagan and his colleagues sought in the 1980s to draw attention to nuclear winter risk. Based upon the empirical evidence we now have the benefit of, where they probably went wrong was in expecting the sooty smoke to self-loft as it absorbed the sun’s heat radiation: the black particles of soot would be heated by the sun and lofted higher and higher into the air, injecting the soot into the stratosphere, where the sun-blocking aerosol would take years to fall out of the air – and with that, catastrophic ground-level cooling and agricultural impact. Instead it now seems more likely that the soot wouldn’t self-loft to a high enough altitude, and would instead get fairly rapidly washed out by rainfall.

  • 18 Al Cam May 27, 2024, 10:22 am

    @Delta Hedge (#17):
    Re: “With the best of intentions and belief, …”
    A somewhat extreme example of the sensitivity of a model to the underlying assumptions! Thanks for the info.

  • 19 Alan S May 27, 2024, 10:24 am

    @Delta Hedge (#17). Thanks – there’s some interesting reading in the links you’ve given there. Let’s hope the calculations remain theoretical.

    So, to stay at least vaguely on topic (and very much tongue in cheek) – would bonds, equities, or commodities do best during ‘nuclear winter’?

I suspect @BarryGevenson’s comment #14 from last week’s Weekend Reading (“90% of life on this planet will be dead in 150 years”) was referring to the potential outcomes of climate change, where currently about 250k additional human deaths per year are predicted (WHO) in areas likely to be particularly hard hit. For other species, potential extinction rates have large error bars (for comparison, there was about 75% species loss when the dinosaurs got ‘zapped’, but that was a bigger event).

  • 20 Marked May 27, 2024, 11:14 am

    So N in FAANG replaces Netflix with Nvidia?

After results this week (a $2.3trn market cap beforehand) it added nearly $250bn the following day – more than the UK’s biggest company… in a day!

Comes back to: a company is worth what people are prepared to pay. That mid-70s% profit margin must come under attack soon, you’d hope.

  • 21 Delta Hedge May 27, 2024, 11:30 am

@Alan S #19: “would bonds, equities or commodities do best”: Benzinga says that, come the AI (or other) apocalypse, you should invest in a LifeStraw, not gold or BTC:
    https://www.benzinga.com/markets/cryptocurrency/24/05/38821153/bitcoin-and-gold-wont-save-you

    James Altucher has written the book on crisis investing: “The Wall Street Journal Guide to Investing in the Apocalypse: Make Money by Seeing Opportunity Where Others See Peril (Wall Street Journal Guides)”:
    https://www.amazon.com/exec/obidos/ASIN/0062001329/thebigpictu09-20

    And Michael Batnick at Ritholtz Wealth Management says to just carry on as normal 😉 :
    https://www.theirrelevantinvestor.com/p/the-thing-that-doesnt-mix-well-with-investing

  • 22 Al Cam May 28, 2024, 6:38 am

    @Alan S (#19):
    Re: “would bonds, equities or commodities do best during ‘nuclear winter’?”

William Bernstein concedes there is not much most people can do to protect against confiscation and devastation risks ‘beyond [having] an interstellar spacecraft’. Maybe this is what really motivates Musk, Bezos, Branson, etc – see e.g. https://spaceimpulse.com/2023/03/09/new-space-companies/

  • 23 weenie May 28, 2024, 10:47 am

    Interesting to read about covered call options – when done properly, they are indeed the right tool for the right investor and I personally know at least one person who lives off his options trading income.

My own foray into options trading has me ‘technically’ selling covered put options, as opposed to the covered call selling strategy explained in the article – it’s just the flip side, and it’s currently working for me.

    Probably still too soon to say it’s the right tool for the right investor in my case though!

  • 24 Delta Hedge May 28, 2024, 11:51 am

    Superb point @weenie (#23).

    Linking your thoughts above to both the “life hedge” (WGGTB) and the Random Roger pieces in the links: I wonder if:

    – for an investor coming towards retirement & wanting to derisk;
    – could they sell covered calls on their equity holdings (say a global equity tracker); and,
    – instead of going immediately into the types of counter equity cycle investments that Random Roger covers that are meant to rise a bit when equities fall;
    – they could use the sale proceeds of the call premia received to buy OTM puts on the same global equity index;
– so that if the worst happens, and equities plunge, then their ‘crash insurance’ is paid for by selling the calls.

    In this scenario, if equity markets surge, and the calls get called, then the investor just sells the equity portfolio to the call option buyer at the strike price, which fits in quite well with the investor derisking from equities to either (or both) of bonds and/or the alternative types of investments that were covered by Random Roger last week.
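For what it’s worth, here’s a toy payoff sketch of that structure – effectively a ‘collar’ (long equity, short call funding a long put) – with made-up strikes and premia, not market data:

```python
# Toy collar payoff at expiry: long equity + short OTM call + long OTM put.
# All strikes and premia are invented for illustration.
spot = 100.0
call_strike = 110.0   # covered call sold ~10% OTM
put_strike = 90.0     # protective put bought ~10% OTM
call_premium = 3.0    # cash received for the call...
put_premium = 3.0     # ...roughly covers the put (a 'zero-cost' collar)

def collar_value(expiry_price: float) -> float:
    equity = expiry_price
    short_call = call_premium - max(expiry_price - call_strike, 0.0)
    long_put = max(put_strike - expiry_price, 0.0) - put_premium
    return equity + short_call + long_put

for p in (60, 80, 90, 100, 110, 120, 140):
    print(f"Index at {p:>3}: portfolio worth {collar_value(p):.0f}")
# Output floors at 90 and caps at 110: the crash insurance (the put)
# is paid for by giving up upside beyond the call strike.
```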

  • 25 Delta Hedge May 31, 2024, 6:19 pm

Nice summary here from a Wharton Business School Prof. of 4 potential AI economic pathways and of LLMs’ modus operandi:

    https://youtu.be/d4f1jqb3Yis?feature=shared

Meanwhile AI cheerleader-in-chief and technological singularity soothsayer Ray Kurzweil has a soon-to-be-published follow-up to his 2005 (slightly bonkers) foundational AI/AGI/ASI classic “The Singularity Is Near”:

    https://www.penguin.co.uk/books/462759/the-singularity-is-nearer-by-kurzweil-ray/9781847928290

    And former Guardian writer, part time Buddhist, and Bali based digital nomad and sci-fi commentator Damian Walter has an intriguing (anarcho-capitalist, libertarian-socialist, hybrid mash up?) take on the potentials and perils of AI. The 2nd of these is quite long. I listen to these at 2x speed on YouTube. The key question in the second is from 7 to 14 minutes in.

Is it going to be a utopia in the mould of Iain M Banks’ Culture; or instead one of William Gibson’s, Philip K Dick’s or Aldous Huxley’s dystopias; or, worse still, one of the worlds which Frank Herbert or George Orwell warned against:

    https://youtu.be/iVd1hPewcCw?feature=shared

    https://youtu.be/uGZW1xnkzkI?feature=shared

  • 26 Delta Hedge June 1, 2024, 9:28 am

    Also recommend this Prof. Stuart Russell talk on AI last month at the Neubauer Collegium at the University of Chicago:

    https://youtu.be/UvvdFZkhhqE?si=3MWUipNKCR-8ryVv

And this interview with him, also from last month, at the Cal Alumni of UC Berkeley:

    https://youtu.be/QEGjCcU0FLs?si=Ey2iw3JO8om3Jw0I

Stuart Russell and Prof. Geoffrey Hinton are among the acknowledged ‘Godfathers of AI’. It’s fair to say that they’re both rather concerned on the safety front. There are a great many talks by each of them out there, but these two items seem to be the most recent in this very fast-moving area by Prof. Russell.

  • 27 Delta Hedge June 14, 2024, 7:21 pm

    Interesting take on the use of AI in investing:

    https://www.telegraph.co.uk/business/2024/06/13/ai-better-investment-decisions-humans/

    And this one on why maybe the technological singularity really is near to hand:

    https://situational-awareness.ai/wp-content/uploads/2024/06/situationalawareness.pdf

  • 28 The Investor June 14, 2024, 10:41 pm

@Delta Hedge — Yes, the second link is a bit of a sobering read, isn’t it? Particularly if one notches up its credibility a couple of ticks due to the author. It hit just when I was calming down about ChatGPT, having had a play with a bunch of induced errors the other day. Perhaps I’ll belatedly include it in the links this week.

  • 29 The Investor June 14, 2024, 11:31 pm

    p.s. On the other hand, one of my friends who works in AI says not so fast, sent me this link:

    https://www.youtube.com/watch?v=xm1B3Y3ypoE

  • 30 Delta Hedge June 20, 2024, 11:30 pm

    More sobering takes on an accelerated timeline from ML/LLM scaling to AGI to ASI:

    https://open.substack.com/pub/unchartedterritories/p/what-would-you-do-if-you-had-8-years

    tbh I don’t know whether to be terrified, anxious or dismissive.

    Instinctively, I favour the precautionary principle until potentially catastrophic tech/ideas are proven reasonably safe in all plausible scenarios.

We now know, for example, that both GM foods and nuclear power are safe by all reasonable and fair-minded definitions (not completely safe and absolutely free from adverse consequences, to be sure; but that’s an unreasonable standard for technologies conferring significant global benefits).

    ASI might turn out likewise to be safe for all practical purposes, if it does emerge soon.

    But until we actually know, we should tread very carefully.

    You wouldn’t rush to make an investment decision involving your entire portfolio. So too we should pause, reflect, test and assure.

  • 31 Delta Hedge July 4, 2024, 1:36 pm

    [Correction: my reference to the SPYO ETF at #11 should have been to the SPMO ETF]

  • 32 Delta Hedge July 17, 2024, 10:11 pm

Latest astounding price target and market cap forecast for Nvidia – but it’s not one from Cathie Wood for a change. It’s from James Anderson, formerly of Scottish Mortgage fame:

    https://fortune.com/2024/07/16/nvidia-market-cap-50-trillion-investor-james-anderson-amazon-tesla/

  • 33 Delta Hedge July 27, 2024, 9:59 pm

I’m posting this here because there are some obvious implications for investors now if the LLM-scaling to narrow AI to AGI to ASI race (and hype) is not merely trying to ascend a much steeper and far higher mountain than OpenAI, Anthropic, xAI, Meta, Gemini et al are planning for; but is instead directing all its efforts at the wrong mountain entirely, where the right mountain is effectively impossibly tall and steep to climb.

So, for over 30 years Sir Roger Penrose – winner of the 1988 Wolf Prize in Physics, shared with Stephen Hawking for the Penrose–Hawking singularity theorems, and of the 2020 Nobel Prize in Physics – has had a highly unorthodox and contentious conjecture about how consciousness arises in the brain.

    He thinks that consciousness could be a quantum process (orchestrated objective reduction) involving structures common in neurones called microtubules.

Apart from the anaesthesiologist Stuart Hameroff, almost no one else has taken the idea seriously, as it would require quantum superpositions in the microtubules, in the warmth of the brain, to be sustained for many orders of magnitude longer than wave function collapse takes for few-particle qubits in the cold (near absolute zero) environments of quantum computers (decoherence takes no more than an attosecond in non-pristine environments, far quicker than any brain process, such as a neurone firing).

There are several other technical and fundamental objections to the idea and, so far at least, it just hasn’t garnered any enthusiasm from the physics, neurology or computation professions trying to understand the hard problem of consciousness.

    Anyway, there’s been something of a breakthrough recently covered very clearly here:

    https://youtu.be/xa2Kpkksf3k?feature=shared

The upshot for AI research and development is that, if Penrose is right, no classically algorithmic process can lead to consciousness, and human-like (or superhuman) intelligence cannot emerge from any current approach to AI.

    And if the AI industry is barking up the wrong tree and (mixing metaphors here) trying to climb up the wrong mountain, then I’d guess that’s not great for investment in tech heavy indices right now.

  • 34 Delta Hedge July 28, 2024, 9:32 pm

An excellent review from Gary Marcus of where we are with LLMs and neural nets:

    https://open.substack.com/pub/garymarcus/p/alphaproof-alphageometry-chatgpt

    Could we be just 12 months from start of the next AI winter, like Gary thinks, or do we just need to have more ‘Situational Awareness’, like Leopold Aschenbrenner contends?

    Under a regret minimisation framework:
    – If Leopold is right, then FOMO could still be satisfied for the majority by having no more than 50% in US large caps, bearing in mind that many of the AI chasing firms are unlisted anyway.
    – And if Gary is right, then loss aversion for most people will still be somewhat assuaged by keeping the US large caps to a 50% limit.

  • 35 Delta Hedge July 29, 2024, 8:26 pm

And here’s one today giving the funeral rites to OpenAI:

    https://www.wheresyoured.at/to-serve-altman/

I’m starting to wonder if this is like the summer of 1999, and the market goes pop after Christmas when uncomfortable realities about the ‘no show’ AI revolution set in.

Still, at least it looks like OpenAI has some revenues ($3.5 bn to $4.5 bn annualised). So there’s something for hope to grab hold of there.

  • 36 Delta Hedge August 30, 2024, 2:59 pm

Nvidia’s return over the last 5 years is now higher than over the 5 years after its IPO. Nuts. Contrast with BTC:

    https://open.substack.com/pub/ecoinometrics/p/bitcoin-diminishing-returns-and-being

  • 37 Delta Hedge September 10, 2024, 11:14 am
  • 38 Delta Hedge January 26, 2025, 9:05 pm

Passed peak AI? China’s DeepSeek could be to Nvidia what the bursting of the dotcom era was to Cisco. Not so much the shoeshine boy moment of 1929 as Wile E. Coyote and the Road Runner:

    https://open.substack.com/pub/garymarcus/p/the-race-for-ai-supremacy-is-over

    https://open.substack.com/pub/bonnerprivateresearch/p/bpr-week-in-review-creative-destruction

    This could be very bad if it turns out that this is what’s been propelling markets – as opposed to the Trump trade, US economic resilience and expectations of rate cuts.

  • 39 Delta Hedge January 27, 2025, 7:41 am

Then again, maybe it won’t be so bad for the semiconductor giants:

    https://open.substack.com/pub/citrini/p/market-memo-deepseeking-answers

  • 40 Delta Hedge January 27, 2025, 9:32 am

Nvidia’s now down ~8% in pre-market, and in the futures market the S&P 500 is ~2.5% off last week’s ATH. It’s a sharp and steep sell-off, with no sign of locally bottoming out yet. Perhaps this just brings forward the day of reckoning, if it turns out that LLM-driven generative ‘AI’ (so called) can’t be profitable at scale even if it is commercialised and widely adopted (i.e. an effectively marginless and moatless commodification scenario, with no pricing power and prices trending towards the marginal cost of production, or even below it).

  • 41 Delta Hedge January 27, 2025, 10:05 pm

Nvidia ended 17% down on the close, at a market cap of $2.9 tn – more than $800 bn off its ATH of just three short weeks ago. Parsing DeepSeek likely involves a preliminary judgement call: is it for real? Or is it a Tupolev 144 – i.e. not a Concorde? Assuming for the moment that it might be for real, is this just a moment with (perhaps dire) implications for the premium shovel supplier in the AI gold rush (fool’s gold???), or does it go deeper and wider? Fallacy Alarm had some hot takes this evening:

    https://open.substack.com/pub/fallacyalarm/p/lets-try-to-contextualize-deepseek

  • 42 Delta Hedge January 28, 2025, 8:10 am

    And a decent lay person’s explanation of how DeepSeek pulled it off (apparently):

    https://open.substack.com/pub/nyugrad/p/deeptrouble-deepseek-vs-all-llm-ais

  • 43 The Investor January 28, 2025, 9:37 am

@DH @All — Ben Thompson is as always extremely good on this:

    https://stratechery.com/2025/deepseek-faq/

    I note he feels AGI is getting closer with this development, which for me remains a negative, albeit in the same category as nuclear weapon proliferation, Republicans ripping up Paris, and gain-of-function research in terms of what I can actually do about it :-\

  • 44 Delta Hedge January 28, 2025, 3:09 pm

Excellent link. Thanks @TI. You know my views on the AGI/ASI endgame (see the Bayesian version of the anthropic Doomsday Argument from statistics): in increasing order of likelihood the outcomes are 1. utopia, 2. damp squib, 3. dystopic mass unemployment and 4. (combined with synthetic biology, GoF and genetically engineered multi-pandemic risk) extinction (we could get 3. followed by 4.)

    In the meantime all we can do is hope for the best and if that’s not the case then maybe we get lucky and it won’t be in our own lifetimes. Whatever happens next there’s next to nothing which any of us can do individually to stop or even to slow or modify it.

  • 45 Delta Hedge January 29, 2025, 11:23 pm

    Sooo much now appearing about DeepSeek. A couple of highlights:

    Ed’s take:

    https://www.wheresyoured.at/deep-impact/

    Which in turn ref’s Venture Beat:

    https://venturebeat.com/ai/deepseek-r1s-bold-bet-on-reinforcement-learning-how-it-outpaced-openai-at-3-of-the-cost/

The human brain (with 100 bn neurones and 100 tn synaptic connections) does pretty robust AGI with 20 watts of power, 1,300 cc of volume and 3 lbs of mass, delivering perhaps 10^17 FLOPS-equivalent (although guesses at computer emulation of the brain at the synaptic level go up to 10^25 FLOPS required).

    Clearly, therefore, there’s no physical law that says that meaningful ‘thinking’ on a non-biological substrate will actually need millions of bleeding edge ($40,000 each) GPUs using up Gigawatts.
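Putting rough numbers on that, using the figures above plus an assumed, order-of-magnitude GPU spec (not a vendor quote):

```python
# Rough energy-efficiency comparison, using the figures quoted above
# plus an assumed (order-of-magnitude, not vendor-quoted) GPU spec.
brain_flops = 1e17   # low-end estimate quoted above
brain_watts = 20
gpu_flops = 1e15     # assumed ballpark for a bleeding-edge datacentre GPU
gpu_watts = 700      # assumed typical board power for that class of card

brain_per_watt = brain_flops / brain_watts   # 5e15 FLOPS/W
gpu_per_watt = gpu_flops / gpu_watts         # ~1.4e12 FLOPS/W
print(f"Brain does ~{brain_per_watt / gpu_per_watt:,.0f}x more FLOPS per watt")
# ~3,500x on these numbers - hence no physical law demands gigawatts.
```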

  • 46 Delta Hedge January 30, 2025, 8:33 am

MMT proponent Richard Murphy, of the left-of-centre taxresearch.org, on why DeepSeek ends the US model of capitalism (only 8 mins, but he speaks slowly so play at 2x speed):

    https://youtu.be/wMxXqDNu9_w?feature=shared

    Although this claim itself looks a bit trivial compared to Tomas Pueyo’s take on AGI/ASI over at Uncharted Territories:

    https://open.substack.com/pub/unchartedterritories/p/the-most-important-time-in-history-agi-asi

  • 47 The Investor January 30, 2025, 9:31 am

    @Delta Hedge — Indeed, a bunch of very credible people are saying AGI is now just a few years away. My best-informed friend was confident energy would be the big constraint, but unless DeepSeek turns out to be a sleight-of-hand that’s at least partly gone out the window.

    And yet we still have a consensus among readers here that AI is nonsense because it ‘lies’ about some specific tiny fact when it returns a complex answer in about 2 seconds that might take a human a day to work through. 😐

    I’m not convinced AGI is coming, perhaps because I am terrified of what will happen when it does, but the weight of evidence is surely that all this is incredibly real.

    That said I enjoy reading Ed Zitron to feel better about things…I *hope* he’ll be right, but hope is not a prediction.

  • 48 xxd09 January 30, 2025, 9:44 am

Rather an interesting take on AI from the coal face.
One of my children, a teacher in a senior management role at a school, is constantly faced with prodigious outpourings of documents from various external sources.
A simple insertion of the required document into an AI set-up – the AI then asked for 5 bullet points from the document, for example – and life becomes manageable once again.
I cannot imagine he is the only one doing this.
    xxd09

  • 49 Ducknald Don January 30, 2025, 10:03 am

@xxd09 It would be amusing if the most useful function of AI turns out to be summarising the mountains of text that other AIs produce.

  • 50 Delta Hedge January 30, 2025, 11:59 am

@TI #47: “consensus amongst readers here that AI is nonsense”: Yeah. But it’s the wrong sort of pessimism. The risk is that ASI happens (shortly after AGI) and combines with/enables/catalyses the other major anthropogenic risks (esp. re: genetic engineering/GoF/synthetic bio/mirror (reverse chirality) life) due to human error or malevolent use. The base odds are not good. Poundstone has a good introductory breakdown:

    https://amzn.eu/d/3XNZKuX

From an investment perspective, I think the extremes of outcomes become more plausible from now on – either US big tech is overvalued (e.g. AI fails/proves uncommercial) or, if not, then it’s grossly undervalued (i.e. a full-on tech singularity). Neither is ‘priced in’ properly IMO: whereas ordinarily you’d expect the S&P 500 to be above 5,000 but below 10,000 come 2030, now I’d guess it’s likely to be either below 5,000 or above 10,000 (perhaps a lot above). This seems to favour buying 5-year deep out of the money calls and puts with say 5%-10% of the portfolio, mentally writing off that money, and rethinking whether (or not), and if so how, to do anything with the remaining 90%-95% (including leaving well alone).
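As a toy sketch of that ‘both tails’ idea – a long strangle of deep OTM calls and puts, with all numbers invented:

```python
# Toy long strangle overlay: deep OTM call + deep OTM put on an index,
# bought with ~5% of the portfolio. All levels and premia are invented.
index_now = 5500
call_strike = 10000     # 'full-on singularity' leg
put_strike = 5000       # 'AI fails / uncommercial' leg
premium_spent = 0.05    # fraction of portfolio spent on both legs

def strangle_pnl(index_2030: float) -> float:
    # P/L as a fraction of today's portfolio, ignoring the ~95% core.
    call = max(index_2030 - call_strike, 0.0) / index_now
    put = max(put_strike - index_2030, 0.0) / index_now
    return call + put - premium_spent

for level in (3000, 5000, 7500, 10000, 14000):
    print(f"S&P 500 at {level}: strangle P/L {strangle_pnl(level):+.2f}")
# Loses its premium if 2030 lands in the 'ordinary' 5,000-10,000 band,
# but pays off in either extreme - which is the bet described above.
```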

  • 51 Delta Hedge January 30, 2025, 3:15 pm

    P.S. Guess that Damo sums up how a lot of us are thinking now about Sam Altman and OpenAI at this difficult time for them:

    http://youtube.com/post/UgkxCXe81JpFePyLPFEVKRL7WEny2NQu4fdT

  • 52 Delta Hedge January 30, 2025, 6:19 pm

    Noahpinion now on the case:

    https://open.substack.com/pub/noahpinion/p/some-simple-lessons-from-chinas-big

    The geopolitical aspect looks significant to me:

    https://open.substack.com/pub/samanthaladuc/p/this-is-the-manhatten-project-20

    And the DT has some sensible suggestions today on diversification (or, if Terry Smith is to be believed, diworsification) in this new era of disruption:

    https://www.telegraph.co.uk/business/2025/01/30/deepseek-has-blown-three-ai-myths-apart/

  • 53 Delta Hedge January 30, 2025, 8:20 pm

An AI tsunami is breaking in Substackopia.
No moats = ubiquity = “if everyone’s using the same tools, does anyone really have an edge?”:

    https://open.substack.com/pub/investinginchina/p/ai-automation-and-the-impact-on-investment

  • 54 The Investor January 30, 2025, 10:27 pm

@Delta Hedge — It’s really remarkable, isn’t it? On the one hand: are we doomed? No consensus. On the other hand: what about margins? No consensus!

    I’m trying to cherrypick the best of the non-technical articles for this weekend’s Weekend Reading but it’s more like dragnet fishing on a first pass… 😉

    BTW good stuff putting these links on older articles, cheers! Usefully updates the old conversations with new breadcrumbs for the interested without crowding out other subjects on the newer posts. 🙂

  • 55 Delta Hedge January 30, 2025, 10:58 pm

    My pleasure 🙂

    It might be time to think about dusting off that draft “Buy The Robot” piece which you once mentioned IIRC 😉

    MV touched upon the subject of AI/Automation back in 2017 of course:

    https://monevator.com/weekend-reading-should-we-invest-in-the-robots-or-in-the-toys-theyll-use-to-keep-us-happy/

Can’t myself see LLM scaling (itself and alone) leading directly to AGI, but there are still multiple approaches beyond just using either LLMs/GPTs specifically or neural nets/deep learning more generally on their own (as AlphaFold 3 showed by adding a diffusion model), and wholly under-explored approaches, like neuro-symbolic AI, where symbolic manipulation acts like Kahneman’s ‘System 2’ reasoning and deduction in biological brains (with the deep-layer neural net operating side by side, like the rapid, reflexive and unconscious pattern recognition of System 1).

  • 56 Delta Hedge January 31, 2025, 12:28 pm

    Some more on whether or not AGI will constitute an employment elimination emergency (the first link critiques the second, which it references and cross links):

    https://open.substack.com/pub/rootsofprogress/p/the-future-of-humanity-is-in-management

    https://epoch.ai/gradient-updates/agi-could-drive-wages-below-subsistence-level

  • 57 Delta Hedge January 31, 2025, 3:14 pm

    More AI central in the blogoverse and in the ‘MSM’. Summarising:

    No AI energy crisis/limit:

    https://www.telegraph.co.uk/business/2025/01/29/net-zero-risks-leaving-britains-ai-strategy-dead-on-arrival/

A DeepSeek deep dive on both the effects on MSFT and on the (much cited this week) Jevons Paradox:

    https://open.substack.com/pub/appeconomyinsights/p/microsoft-ai-demand-paradox

    The AI hype cycle in brief:

    https://open.substack.com/pub/thelastbearstanding/p/a-moment-of-disbelief

Perhaps also worth including this one from back on inauguration day (seems a lifetime ago in AI land) for a comprehensive overview of the semi/fab landscape & AI in 2025:

    https://open.substack.com/pub/aisupremacy/p/the-ai-semiconductor-landscape-2025

  • 58 Delta Hedge January 31, 2025, 8:27 pm

    This one (Exponential View) is a really excellent breakdown:

    https://www.exponentialview.co/p/deepseek-everything-you-need-to-know

    Apparently the FT has done something of a masterpiece on the sell side view of DeepSeek, but it’s fully paywalled.

  • 59 The Investor January 31, 2025, 9:02 pm

    @Delta Hedge — I think it was “own the robots” 😉 I can’t check because I deleted the draft a few months ago.

    When I was writing it (at least five years ago but actually I think it was pre-Covid so even before then) I was enjoying speculating about AI, and scaring my friends with doomy scenarios that seemed like something for their grandchildren to worry about. But I was sufficiently convinced that there was at the least an under-anticipated risk to jobs and /of/ economic disruption that everyone should own at least some tech.

    That sounds very obvious these days, because we all own loads of tech. The most passive global investor has tons of it in their portfolio, just via their S&P 500 / US exposure as you know.

    We all own the robots now. 😉

  • 60 Delta Hedge February 1, 2025, 3:06 pm

    @TI #59: If there’s one thing which I’ve learnt over my professional career it’s never delete anything, including drafts. Clients ask the same questions, just in a different way. 😉 Stephen Wolfram of Wolfram Alpha logs every keystroke he’s ever made!

    If AGI/ASI arrives before Elon’s/ China’s humanoid robots (which looks likely at the moment) then we could end up in a no moat situation because intelligence could be as cheap and ubiquitous as ice in Antarctica.

    Installing intelligence into a free standing robot capable of reliably navigating the real physical world with ease might, however, present some opportunities to get ahead commercially and, maybe, to get an enduring competitive advantage.

    So Tesla with FSD and Optimus might be a better investment than trying to deliver a billion virtual Stephen Wolframs via whatever replaces the internet.

    The alternative to Mag 7 is ex US SCV of course:

    https://sellside.substack.com/p/deutsche-bank-is-small-cap-better

    Still thinking that some sort of barbell-style allocation here might be the best ex ante risk-adjusted approach.

  • 61 Delta Hedge February 2, 2025, 5:17 pm

    On robotics v pure AI plays (re above):

    “US venture capital investment in robotics has risen from around $2 bn in 2019 to more than $3.5 bn last year, according to data from PitchBook. In the first nine months of 2024, there were 130 fundraising deals for robotics start-ups — more than across the entirety of 2019. Among the most high-profile was a $675 mn investment last February by Amazon founder Jeff Bezos, Microsoft and Nvidia in Figure AI, a Silicon Valley start-up founded in 2022 that is working on a faceless, humanoid “general-purpose” robot.”

    Source: Financial Times

    In the most optimistic scenario – after ubiquitous AI, AGI, then ASI, and after mass adoption of all-purpose humanoid robotics – come widespread unique use cases for fully commercialised quantum computing.

    In the other scenarios… not so much.

  • 62 The Investor February 6, 2025, 7:14 pm

    @Delta Hedge — Haha, I hear you on the drafts and my experience is similar! Often I’ve been able to up-cycle something that didn’t work out into a different article or take another time. Or just found it useful reference material.

    However in this case I deleted it in a fit of pique. I’m basically just annoyed I didn’t lay down a big marker here a decade or so ago as it was already on my mind.

    BUT — plot twist! — I’m not so annoyed anymore as I’ve just discovered Josh Brown did it anyway way back in 2017:

    https://thereformedbroker.com/2017/10/16/just-own-the-damn-robots/

    Of course it’s possible I read his article and this is why I abandoned my draft all those years ago, after all. Who knows, I certainly can’t remember now! (No wonder the AIs are set to inherit the world, eh? 😉 )

  • 63 Delta Hedge February 7, 2025, 7:48 am

    More unbridled AI optimism this week from Uncharted Territories, albeit not so much on the jobs front.

    https://open.substack.com/pub/unchartedterritories/p/ai-weeks-when-decades-happen

    These guys aren’t so sure though:

    https://open.substack.com/pub/thezvi/p/the-risk-of-gradual-disempowerment

    Taking in the really big picture – where the further back you peer, the further forward you might see – the LLM/deep nets/neural nets/machine learning – AI/AGI/ASI – quantum computing mega-paradigm looks to be an acceleration of a trend of increasingly frequent and intense phase transitions in the human condition: from the social/cultural/agricultural, to the energy/industrial/urban, to the (overlapping) informational/intelligence ‘revolutions’.

    It’s not deterministic as such, but there does seem to be a coupling of technology to historical contingency, with each phase resulting in more extreme, more unpredictable and more impactful outcomes, negative and positive.

    It may or may not be a singularity which we’re headed towards, but it does look like it could well be another such major epochal transition:

    https://youtu.be/t1qxJI9nc2g?si=SuCkOmAYSnNNVIh8

    Time will tell if we make it to the other side of the transition as we have in the past.

  • 64 Delta Hedge February 7, 2025, 8:15 am

    @TI #62: better to be too early than too late. Just one TSLA, NVDA or PLTR ‘right call’ made 5, 10, 15 or 20 years out from the firm going on to achieve market dominance makes the portfolio’s return. That’s another way to read Bessembinder.

    It’s not about competitive edge.

    Competition is for losers.

    It’s about firms which create their own market, be it cutting edge GPUs, BEVs with FSD or search based on relevance ranking.

    If you do it the same as everyone else you get the same results as everyone else.

    To the extent that there’s any trait common to Bessembinder’s elite 0.25% to 0.3% of companies by value creation, that’s it.

    That might mean growth at unreasonable prices or enduring years of losses.

    The problem is that the traits of outsized success are also those of failure and bankruptcy (Nikola anyone?)

  • 65 The Investor February 7, 2025, 9:25 am

    @DH — Agreed, and to be clear I didn’t post the article but I absolutely owned my share of the robots… 😉 (Albeit traded around a lot post-2022, for good and in 22/23 ill!)

  • 66 Delta Hedge February 8, 2025, 7:31 pm

    Post DeepSeek, here’s an excellent ‘Schumpeterian’ dissection of the Jevons Paradox and whether it exists as such:

    “Jevons Paradox Does Not Support a Bullish Thesis for AI Tech Stocks”

    http://uk.investing.com/analysis/jevons-paradox-does-not-support-a-bullish-thesis-for-ai-tech-stocks-200615403

    As this cross cuts into the ‘What to do if you’re queasy about the US stock market’ valuation piece, I’ll put some thoughts there.

  • 67 Delta Hedge February 9, 2025, 6:55 pm

    Another take, this one mixed in its conclusions, on DeepSeek and the Jevons Paradox:

    https://open.substack.com/pub/nyugrad/p/jevons-paradox-brings-price-destruction

    Maybe AGI/ASI, were it to arrive, would be a disaster. But if it’s not, then we can’t say who will benefit or how – only guess.

  • 68 The Investor February 10, 2025, 11:22 am

    @Delta Hedge — Well I agree with that Substack. AGI would be such a game changer (at least if possible at scale) that anything could happen – certainly massive disruption across all incumbents. If it’s a singularity-type arrival (self-iterating towards super-genius) then, even if it isn’t ultimately terrible for humanity, all bets are off.

    Whichever Plains Indians had the best access to buffalo hunters or the most excellent medicine men, it didn’t matter much when Europeans arrived with the industrial revolution and smallpox…

    Hassabis saying today AGI is five years away. From memory he was more a decade-plusser before:

    https://www.cnbc.com/2025/02/09/deepseeks-ai-model-the-best-work-out-of-china-google-deepmind-ceo.html

  • 69 Delta Hedge February 10, 2025, 8:49 pm

    @TI: with great power comes greater responsibility.

    Demis Hassabis’s Google DeepMind AlphaFold was just this evening used as a case study for the undoubted benefits of machine learning/narrow proto-AI, over on Veritasium:

    https://youtu.be/P_fHJIYENdI?feature=shared

    Sounds all wonderful unicorns and fairy dust, but… what happens when this tech is improved 100,000x, becomes over 1,000x cheaper, and is then made available to tens and hundreds of millions of people to use unsupervised?

    Fully synthetic biology. From digital to DNA (or RNA) home synthesis, including for reverse chirality bacteria. It’ll be a disaster. 99.99% safe usage isn’t enough. It’ll just take one nutter.

    We don’t need AGI or misalignment (much less still an ASI malevolent takeover, Terminator style) for this to be our last collective mistake.

    Malevolent autonomously acting ASI is probably a fantasy (thankfully).

    What unfortunately isn’t a fantasy is that throughout history there are individuals who try and do crazy things.

    Until now, they’ve been pretty limited in the harm most of them can do.

    But if we allow this technology to proliferate in an advanced form, that will no longer be the case.

    We’ll probably have had it.

    Worries about AI-created joblessness or inequality or loss of control or of deep fakes, scams and misinformation won’t hold a candle to the extinction risk.

  • 70 Delta Hedge February 13, 2025, 8:51 pm

    OMG. This is 2 LLM agents discussing AI in the style of a podcast:

    https://www.notyouradvisor.com/p/what-does-ai-think-about-gary-marcus?

    The era of AI-induced mass white-collar unemployment may be close at hand.

  • 71 Delta Hedge February 15, 2025, 9:29 pm

    Technocracy today = technogarchy tomorrow = techno feudalism the day after:

    https://youtu.be/_LQa28X-1AQ?si=-mA3Seze106_qOwt

    Damo/Martin Niemöller has spoken: “First they came for the delivery couriers, and I did not speak out— Because I was not a delivery courier. Then they came for the taxi drivers, and I did not speak out— Because I was not a taxi driver. Then they came for the creative freelancers, and I did not speak out— Because I was not a creative freelancer. Then they came for me, a salaried professional— and nobody cared because algorithmic gigwork apps were normalised by then. The gigwork apps of the technogarchy are coming for all of us”

  • 72 Delta Hedge February 17, 2025, 8:07 pm

    (Piece from today) Ed’s still not at all impressed with Generative ‘AI’, so called:

    https://www.wheresyoured.at/longcon/

    Is this the internet in 1994, just as Netscape came of age, or in 2000 in the moment of Pets.com?

  • 73 Delta Hedge February 26, 2025, 7:11 pm
  • 74 Delta Hedge February 28, 2025, 2:01 pm

    What *if* this dip is different?:

    https://www.thelastbearstanding.com/p/max-stupid

    And, if it is different, then is this because of souring AI sentiment?:

    https://open.substack.com/pub/garymarcus/p/hot-take-gpt-45-is-a-nothing-burger

    Or just because the orange man baby has gone full caps TARIFFS again?

    Occam’s razor seems to favour the latter explanation, but it could be both.

    Nvidia’s consensus-forecast-busting most recent earnings have definitely not helped the stock, so a mixed causation might be right on the money this time round.

    In any event, so far at least, the ‘Mump’ regime has not been particularly kind to US equities, especially, and ironically, to Tesla.

  • 75 Delta Hedge February 28, 2025, 2:16 pm

    Got timed out on editing so please excuse the Part 2 here.

    Meta looks to me like the stand out in the Mag 7.

    Still relatively cheap on fundamentals despite a truly epic recovery from 2022.

    Still investing into VR which so called Gen ‘AI’ should help with.

    Along with Apple, the most likely to benefit from cheaper LLMs (re DeepSeek).

    No wonder Mark Zuckerberg wears expensive watches these days 😉

    Alphabet looks the most vulnerable.

    No discernible AI strategy left.

    Google search is thoroughly enshittified, and it’s the most likely to be immediately negatively impacted by LLMs.

    Tesla still has big optionality, and might be a buy for the brave.

    But President Musk’s self-sabotage and lower-cost but still high-quality Chinese BEVs are a massive threat, especially outside the US, even if Tesla pulls out all the stops on FSD.

    Apple looks post-growth.

    Amazon and Microsoft are somewhere in-between IMO.

  • 76 Delta Hedge March 1, 2025, 1:15 pm

    Altman falling on hard times with GPT 4.5 🙁

    https://open.substack.com/pub/garymarcus/p/openai-in-deep-trouble

    My heart bleeds

    LLMs might be hitting the scaling wall, but there are, of course, many more promising approaches, including inference-time scaling.

  • 77 Delta Hedge March 1, 2025, 9:13 pm

    LLMs are distorting mirrors of their training set.

    Will AI emerging from them be something so changed it will be alien? Beyond comprehension.

    Yet feeding upon our culture. Our creativity. Our fears. Our hopes. Our dreams. Our minds.

    A dark mirror.

    A disembodied presence.

    But an all and always present one. An oceanic sentience.

    Solaris:

    https://youtu.be/ilsdMoQgBDs

  • 78 Delta Hedge March 6, 2025, 7:33 am

    On the other hand, according to Gary today, AGI’s still nowhere near:

    https://open.substack.com/pub/garymarcus/p/ezra-kleins-new-take-on-agi-and-why

  • 79 Delta Hedge March 9, 2025, 10:53 am

    Obviously I go back and forth on this one endlessly ( 😉 ), but here’s a study last month suggesting that current approaches to AGI would need ~10exp26 parameters (a 1 followed by twenty-six zeros, so of order a hundred trillion trillion, compared to the several hundred billion parameters in today’s frontier models), and so many Nvidia H100 GPUs would be required that they’d cost 40 million times Apple’s market cap:

    https://arxiv.org/abs/2502.18858
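
    Just to sanity-check that headline figure, here’s a back-of-the-napkin sketch in Python. The fp16 weights, H100 price and Apple market cap numbers are my own round-number assumptions, not the paper’s – the point is only the order of magnitude:

        # What would it cost in H100s merely to hold ~10exp26 parameters in memory?
        params = 1e26            # parameters the paper says current approaches would need
        bytes_per_param = 2      # fp16 weights (my assumption)
        h100_memory = 80e9       # bytes of memory per H100 (80 GB)
        h100_price = 30_000      # USD per card (rough assumption)
        apple_cap = 3.4e12       # USD, roughly Apple's market cap (my assumption)

        gpus = params * bytes_per_param / h100_memory   # ~2.5e15 cards
        cost = gpus * h100_price                        # ~7.5e19 USD
        print(f"{cost / apple_cap / 1e6:.0f} million times Apple's market cap")   # ~22

    Memory alone lands at ~22 million times Apple’s cap – the same ballpark as the paper’s 40 million times figure (which presumably also costs the training compute). Hopeless either way.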

    On the other hand the human brain manages a modest general intelligence on just 20 to 30 watts contained in 1.35 kg and 1.4 litres.

    As Jeff Goldblum, playing Dr. Ian Malcolm, put it in the adaptation of Michael Crichton’s Jurassic Park, “Life finds a way”.

    So AGI must be very doable ‘somehow’, and the human brain is of course very far from optimised for cognition etc.

    So in terms of basic physics some form of ASI, and not just AGI, seems possible, albeit that I’m increasingly doubtful LLM scaling alone can get to either.

    The big danger here, it seems to me, in terms of achieving early technical success, is that LLMs have monopolised the field; and when they don’t deliver (GPT 4.5 could be the sign) there’ll be an overreaction as large as the hype cycle.

    So we’d be looking at a scenario of a crash (which would only be partly justified), followed by an AI winter (as after previous hype cycles) which would not be justified.

    That mirrors the TMT/dot com ‘bubble’: the parabolic run-up of 1998/99 was partly justified, but with hindsight the full extent of the crash to 2002/03 was not (NASDAQ went from 1,000 to 5,000 over 1998-2000, back down to 1,000 in 2002, and is now at about 20,000 in 2025).

    As in 2002/03, the AI winter, if it is coming, could be the time to go significantly ‘in’ on AI related and adjacent firms and (assuming that it follows a similar crash trajectory) quantum computing companies.

  • 80 Delta Hedge March 9, 2025, 11:54 am

    Although a further thought occurs, which is that time is the friend of good firms and the enemy of bad ones.

    Thematic AI & robotic ETFs have actually underperformed the SPY (and by quite a bit).

    This is not a good thing if one is expecting (or even just hoping for) a ‘rerun’ of the tech ‘pump and dump’ of 1998-2002 followed by the seemingly sustainable expansion since 2012 (the Twainism in operation: history rhymes but does not repeat).

    This suggests that today’s AI-specific stocks, at least on a thematically weighted ETF basis, are rubbish at capturing upside potential vis-à-vis the Mag 7 (or Mag 6 or FAANG).

    As the Mag 7 (or any reasonable variations on them) are already more than adequately represented (and their performance captured) in (and by) both SPY and QQQ, what investor benefits do thematic specific approaches actually offer here? (Leaving aside the increasingly likely awful timing of going into them now, versus waiting for the full onset of an AI winter).

    As for quantum stocks, they’re having an ARK invest moment as money flowed in after Alphabet’s Dec 2024 announcement of the Willow chip breakthrough on error correction and Microsoft’s February 2025 statements on progress in topological quantum computing.

    Those caused inflows into the Defiance Quantum ETF (QTUM) to explode, and the price to pump. That needs to unwind before it might be clearer whether (or not) it could realistically offer anything, exposure-wise, over and above the tech element of SPY/QQQ.

  • 81 Delta Hedge March 16, 2025, 10:15 am

    *If* this isn’t BS (and it always could be, but there does seem to be experimental work/ research going on here) and *if* it can be adopted at scale, then it could be a ‘game changer’:

    https://open.substack.com/pub/fractalcomputing/p/the-black-swan-event-about-to-hit

    And *if* it worked, and *if* the Jevons Paradox holds true, then this would be a + for the semis and hyperscalers, but otherwise a –

    For Apple though, in either event, if it was successful then it could screw with their bespoke M-series chip architectures, so they’d have to retool.

  • 82 Delta Hedge March 20, 2025, 2:15 pm

    More today on whether AGI (or even just narrow AI, as opposed to machine learning) via LLM approaches is a “hoax”:

    https://open.substack.com/pub/aisupremacy/p/is-agi-a-hoax-of-silicon-valley

    TL;DR (quoting from a survey of 475 AI researchers, reported in the Association for the Advancement of AI’s March 2025, 88-page, Presidential Panel report on the Future of AI Research): “yes”, it’s a hoax. 76% thought it unlikely or very unlikely that LLMs will get us to AGI.

    Where does this leave us now?

    Well, everyone knows this information.

    It’s not privileged or proprietary. It’s published.

    So if you accept even just weak EMH it’s in the price already, right now.

    Say 5700 on the S&P 500 today (or 6100 before Trump’s tariff tantrums) represents and embeds the collective expectations of (in aggregate) AGI failure.

    Of course, a lot of useful work could be done with just LLMs, well short of ever getting to AGI (at least by that route).

    If fractal computing architecture optimisations can deliver 1,000x+ speed-ups, and given the task specialisation DeepSeek used to gain efficiency, then there’s room for huge quantitative improvements in compute input, which should give some (diminishing return) gains on functional outputs.

    So there’s still some basis for elevated US large cap growth valuations in a world with no true AGI.

  • 83 Delta Hedge March 25, 2025, 10:06 pm

    The sages don’t seem to be feeling the AGI vibe (as much):

    https://open.substack.com/pub/thezvi/p/on-not-feeling-the-agi

    Check out the epoch.ai Gate AI and Automation outcomes simulator though (referenced therein):

    https://epoch.ai/gate

    Mind-blowing 😉

  • 84 Delta Hedge March 28, 2025, 11:42 am

    Here’s the link to the preprint paper behind the epoch.ai AGI/ASI economic simulator linked to above:

    https://arxiv.org/abs/2309.11690

    There’s a wider span of guesses/predictions here than in any other area I can recall.

    The above pegs the scenario with only 5% unsubstituted human labour and an elasticity of -0.2 as leading to a 6.4x10exp7 (64 mn fold) increase in output, whilst 25% unsubstituted at an elasticity of -2 gives ‘merely’ an 8-fold increase (a range of 8 mn times).

    Daron Acemoglu, meanwhile, puts the cumulative gain from LLMs to US GDP over 10 years at just 0.93% to 1.16%, with generative ‘AI’ contributing 0.68%, or just 0.066% p.a.

    So, going from the most optimistic to the most pessimistic, you’re looking at a range of ten orders of magnitude (10 bn fold, or 10 OOM overall), albeit the lowest guess for impact is over 10 years, and the highest spans a wider range of timescales (from 10-20 years out to 80 years mentioned, with the epoch.ai simulator using 2045 as a baseline, so 20 years).
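
    For the avoidance of arithmetic doubt, here’s the span of those guesses in a couple of lines of Python (the inputs are just the figures quoted above):

        import math

        optimistic = 6.4e7    # 64-mn-fold output increase (5% unsubstituted, elasticity -0.2)
        pessimistic = 0.0116  # Acemoglu's ~1.16% cumulative GDP uplift, as a fraction
        print(round(math.log10(optimistic / pessimistic), 1))   # ~9.7, i.e. ~10 OOM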

    I think having a modest to meaningful (5%-20%+) asymmetric exposure here makes sense, but maybe wait to see if there’s a pull back or, ideally, a full-on crash (a la 2000-2002) first; and then go with a hedged bet between, on the one hand, Chinese tech firms (there’s an ETF for that 😉 ), and, on the other, an equal split of TSLA (FSD/robotaxis/Optimus humanoid robots/Grok), PLTR (AIP etc), Meta and Alphabet (for their own models), and NVDA and ASML (semis).

  • 85 Delta Hedge March 31, 2025, 11:33 am

    I don’t know if you’ll pick this up @TI, but if so then you absolutely must get hold of a copy of Robin Hanson’s “The Age of Em” (as in emulations) (OUP 2018).

    It’s Ray Kurzweil’s “The Singularity is Near” (Viking 2005) and “The Singularity is Nearer” (Penguin 2024) on fast forward.

    I’d also warmly recommend here Physicist Max Tegmark’s “Life 3.0” (Allen Lane 2017).

    We’re either on the brink of the greatest transition since the emergence of biology on this planet – likely leading to either extinction or transcendence – or it’s not and never will happen, in which case maybe the US mega caps fall 20% to 30%. I guess we can all agree the first two possibilities are respectively more terrifying and exalting than the third.

  • 86 Delta Hedge April 13, 2025, 10:42 pm

    So, on the new “AI 2027” scenario/ warning:

    This does look rather déjà vu with Leopold Aschenbrenner’s “Situational Awareness” from last year.

    The link to the “AI 2027” scenario is in the comments to the MV w/e reading on 12/4/25.

    If you believe that LLMs are not going to scale, and that AI/AGI/autonomous advanced ASI is not happening soon by any other method, then nothing in the 71-page “AI 2027” screed (it can be downloaded as a PDF) will alter your view.

    I can understand your skepticism if you are in this camp. It’s my own natural home.

    But… there’s just a chance (albeit, I think, only a teeny tiny one) that this is on the mark, and that we really are at the precipice.

    Here’s some hot takes from a good Substack round up (which also links back to an interview on the scenario):

    https://open.substack.com/pub/thezvi/p/ai-2027-responses

  • 87 Delta Hedge April 14, 2025, 6:40 pm

    Do the kind of extreme scenarios postulated in AI 2027 (and Situational Awareness) represent:

    – Realistic depictions of plausible dystopic or catastrophic terminal events and utopian off ramps?
    Or:
    – Misplaced attempts to warn of the promises and perils of AI that will not materialise (at least foreseeably soon)?
    Or:
    – Part of the AI investment hype cycle complex?

    There are some pretty wide outcome ranges to the assumptions underpinning the AI 2027 scenarios. E.g. going through the Appendices: whilst the authors give a most likely figure of 95 years to go from a superhuman-equivalent AI researcher to full ASI, the range they quote at an 80% confidence level runs from just 2.4 years (which I think they use in their scenarios) to 1,000,000 years – that’s not a typo, their estimate at 80% CI really stretches to a million years.

    It does make one wonder what degree of confidence to place in some of the other numbers. This – ASI – is, after all, inherently and extremely speculative stuff; maybe more so than any other possible (and possibly near-term) high-impact anthropogenic ‘event’.

    The reality at the moment is that pure LLM approaches seem to be hitting a scaling wall and are wildly unprofitable, at least in terms of their current commercial viability, as Ed points out in great detail only today:

    https://www.wheresyoured.at/openai-is-a-systemic-risk-to-the-tech-industry-2/

    So maybe these scenarios are getting just a little bit ahead of themselves?

  • 88 Delta Hedge June 21, 2025, 8:14 pm

    As it’s now a year on from the call centres piece above, and given both the 3-yr ‘event horizon’ on comments and that this could turn out to be the most significant Monevator piece so far, perhaps I’d better update with a new report from Morgan Stanley, and an article in May from Cal Newport, dealing with the 2nd order effects. Namely:

    1. No imminent AGI, still less ASI. Forget “AI 2027”. As Adam Curtis might put it, “but it was all a fantasy” (or as Cal says, “an incredible claim”). So chill on the Existential Risk.

    2. Looks like no economic transformation to make the industrial revolution or the introduction of electricity pale in comparison; but the real-world effects are still distinctly non-trivial.

    3. Even mundane utility can be quite high impact if deployed at scale, which it is here: e.g. ChatGPT had 800 mn weekly active users by April 2025 (and got its first 1 mn users in just 5 days, much faster than the iPhone (74 days) and Netflix (3.5 years)), and passed 365 billion annual searches in just 2 yrs (it took Google 11 yrs). But there’s still plenty of room to grow, given that Alphabet handles ~5 trillion searches p.a., or 14 bn per day. Likewise Gemini has 5x’d its developer count to 7 mn in the last yr.

    4. Picks and shovels playing out for now: Apple, NVIDIA, Microsoft, Alphabet, Amazon and Meta CAPEX up 63% last yr to $212 bn; Microsoft’s AI product revenue at a $13 bn run rate in 2024, a 175% yearly increase; Anthropic’s annualised revenue 20x to $2 billion in 18 months; Perplexity’s annualised revenue 7.6x to $120 million in 14 months.

    5. Consumers benefitting: training dataset sizes have grown by 260% annually over the past 15 years, training compute (FLOPs) has increased 360% annually over the same period, and the cost of AI inference (per 1 mn tokens) has dropped by 99.7% in the last two years.

    6. But the biggest beneficiaries to come will be the firms of the future leveraging all this up: e.g. early auto manufacturers faced intense competition, but a 2nd order beneficiary like Walmart, which thrived on car-facilitated suburbanisation, had a >1,600x return from 1980 to 2020 (to Ford’s 23x); and, whilst WiFi routers were commoditised, Netflix did a >500x total return from 2002 to 2020 (to Cisco’s 5x).

    7. So it’s likely that the ‘all just a flash in the pan’ naysayers are as wrong as the ‘we’re all doomed’ clapper board crowd; and that the early prediction of just 0.67% GDP impact from gen ‘AI’ over a decade (and a mere and meagre 1.16% one-off uplift from ‘AI’ overall) looks as light as the tech bros’ singularity looks indefinitely postponed.

    Time to get real and work out now who actually benefits and who loses, when, how and by how much.

    Here’s the report and article:

    https://www.morganstanley.com/im/publication/insights/articles/article_investinginsecondordereffects_ltr.pdf

    https://calnewport.com/ai-and-work-some-predictions/

  • 89 Delta Hedge June 28, 2025, 10:47 am

    Daniel Kokotajlo has adjusted his estimates, and has moved his median from ‘AI 2027’ to ‘AI 2028’ based on events since publication. Maybe this will be like one of those delayed trains where it shows on time until the departure time and then the arrival time goes back by 1 minute every minute 😉

    Meanwhile Less Wrong has a new detailed take down of “AI 2027” (19 June 2025, Titotal: “A deep critique of AI 2027’s bad timeline models”). Worth a read IMO.

    And RAND has a study on AI X (extinction) risks which concludes as follows:

    In the three scenarios examined — nuclear weapons, pathogens, and geoengineering — human extinction would not be a plausible outcome unless an actor was intentionally seeking that outcome. Even then, an actor would need to overcome significant constraints to achieve that goal. Analysis under uncertainty requires specific analytic approaches. X threats posed by AI are immensely challenging but cannot be ruled out. X threats occur over long timescales, allowing time to respond. AI would require four capabilities to create X threats: (1) integration with key cyber-physical systems, (2) the ability to survive without human maintainers, (3) the objective to cause human extinction, and (4) the ability to persuade or deceive humans to avoid detection.

    AFAICT #(4) looks to be in the bag for LLMs already so that leaves three to go for future AI…

    Here’s the study:

    https://www.rand.org/content/dam/rand/pubs/research_reports/RRA3000/RRA3034-1/RAND_RRA3034-1.pdf

  • 90 Delta Hedge July 4, 2025, 2:59 pm

    Re: My April 14th 2025 comment #87 and the 2.4 year /95 year /1 mn year figures:

    To clarify, from this recent Future of Life Institute interview (link below) with “AI 2027” co-author Daniel Kokotajlo (on “Why the AI Race Ends in Disaster”): the 95-year figure for going from a superhuman-equivalent AI researcher to full-spectrum ASI is the peak of the distribution of possible timescales (i.e. the single most likely figure within a range running from just 2.4 years right out to 1 mn years) without using any AI to accelerate the process – i.e. relying only on human AI researchers. (The equivalent peak of the distribution for the timeline to develop a superhuman-equivalent AI researcher using only human researchers is 19 years.)

    Using AI first to accelerate the development of a superhuman-equivalent AI researcher, and then using that researcher to accelerate the development of fully autonomous ASI, very greatly reduces those peaks: to just 6 months for the first stage and 3 months for the second – i.e. less than a year overall, with a range of two months to five years, to go from here to a superhuman-equivalent AI researcher and then on to fully autonomous ASI.

    This is covered towards the end of the interview IIRC:

    https://youtube.com/watch?v=V7Q3DJ9V5CQ&feature=shared

  • 91 Delta Hedge July 22, 2025, 2:05 pm

    Two takes on ‘AI’ scepticism, one being somewhat half optimistic, from Gary Marcus (13th July), and the other deeply pessimistic, from Ed Zitron (yesterday, 21st July, it’s an epic 14,500 words!):

    https://open.substack.com/pub/garymarcus/p/how-o3-and-grok-4-accidentally-vindicated

    https://www.wheresyoured.at/the-haters-gui/?ref=ed-zitrons-wheres-your-ed-at-newsletter

    I’m currently with @ermine over at SLIS, and also give 60% odds that this ends badly for equity markets.

    However, in every crash there’s opportunity; and if a syncretic, hybrid neuro-symbolic AGI – or even, eventually, ASI – ultimately emerges out of the failure of scaling LLM-only approaches, then in the longest run that’ll end up being all to the good for the field of AI as a whole. The Magnificent 7 rose as much from the ashes of 2002/3 as from the boom of 1995-99.

  • 92 Delta Hedge July 23, 2025, 3:46 pm

    The Royal Institution today has an excellent talk on YouTube by Geoffrey Hinton (2024 Nobel winner and ‘Godfather of AI’). He tries to make the case for a weights-based approach being fully generalisable in principle to intelligence and sentience, and in consequence for logic and reason being derived rather than innate properties (such that they can be learnt by LLMs without any pre-programming).

    I’m not convinced. There’s so much we don’t understand about the hard problem of consciousness.

    Maybe after 2010 we did strike lucky with neural nets and just happened to discover so rapidly the one generalisable approach.

    But it seems a priori more likely that we’re not heading up the one right mountain (if there is only one); or, more likely again, that what we think of as AGI will actually require a blend of multiple simultaneous approaches (and their associated software and hardware instantiations).

    Very interesting what Mr Hinton says on the implications of analog versus digital computing paradigms.

    Although not covered by him in this talk, there’s also future reversible computation, and the limited scope but staggering speed of quantum computation.

    Even ‘just’ in the arena of existing digital, classical, non-reversible computation, there are likely multiple orders of magnitude of improvement available in effective floating point operations per watt, from both algorithmic optimisation and improvements in GPU parallelism.

    Maybe that massively underreported revenue shortfall Ed Zitron highlights won’t be such a deal breaker? Then again, the tech bubble burst in the US in March 2000 the day that (IIRC) Barron’s said Amazon might run out of cash to burn first – not because the final destination of highly scalable tech oligopoly was ever called into doubt.

  • 93 The Investor July 23, 2025, 5:52 pm

    @Delta Hedge — On the subject of analog versus digital computing, I highly recommend George Dyson’s ‘Analogia’, which is basically a sideways look at the whole issue via an extended (natural) history lecture and various metaphors.

    https://amzn.to/3IJuup5

    Warning: I’ve suggested it to several friends and about half of them hate it. The other half, like me, found it mildly mind-blowing. But it works more like a poem than a text book.

  • 94 Delta Hedge July 23, 2025, 8:35 pm

    Interesting. I’ll have to try on a Prime trial as an audiobook.

    I see Dyson’s previous book “Darwin Among the Machines” takes its title from Samuel Butler’s 1863 letter to the New Zealand press (just four years after Darwin’s “On the Origin of Species”) warning that machines would eventually replace humans.

    What I don’t get about LLMs’ supposed natural “emergent” behaviour is that it rests upon the feedback of the back propagation approach, which seems like a form of programming in reverse sequence (i.e. done after the task, rather than before, as was conventional before neural networks).

    That doesn’t seem truly emergent, at least as I understand emergence: ‘higher’ order phenomena manifesting from more fundamental substrates (e.g. temperature emerging from molecular motion, or fluid dynamics from Newtonian mechanics).

    It seems more like a form of after-the-fact fine tuning – i.e. starting from the output, the network works backward to figure out how much each ‘neuron’s’ calculations, in each layer, contributed to the error, and then adjusts the weights.
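
    In toy form – a single linear ‘neuron’ in pure Python, with illustrative numbers of my own – that backward pass is just this:

        # Forward: predict. Backward: measure the error, assign blame, adjust the weight.
        x, target = 2.0, 10.0   # one training example: input and desired output
        w = 1.0                 # initial weight
        lr = 0.05               # learning rate

        for _ in range(20):
            pred = w * x               # forward pass
            error = pred - target      # how wrong the output was
            grad = 2 * error * x       # gradient of the squared error w.r.t. w
            w -= lr * grad             # nudge the weight to shrink the error

        print(round(w, 3))             # ~5.0, since 5.0 * 2.0 = 10.0

    Multiply that by billions of weights across many layers and you have back propagation: powerful, but a form of post hoc error correction rather than anything obviously ‘emergent’.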

    Are the novelty and emergent properties of LLMs, and, therefore, perhaps also their potential for learning and useful new thought, being radically oversold to us?

    And, if so, then what does that mean, both for the future of AI more broadly, and for the wannabe commercial empires which are now attempting to build upon LLMs?

  • 95 Delta Hedge July 25, 2025, 7:04 pm

    Google’s AI systems processed a quadrillion tokens (1,000,000,000,000,000) in June 2025 (1 token = ~0.8 words in English), more than double the volume (480 trillion) in May. [X/Demis Hassabis].

    Data centre (DC) Capex to 2028 now pegged at $2.9 trillion, with $1.3 trillion for GPUs alone, with DC Capex hitting $900 billion in 2028 itself, nearly equal to the entire 2024 Capex of the S&P 500 ($950 billion).

    Other sources estimate a current pace of AI development scenario DC Capex of $5.1 trillion cumulatively by 2030, or $7.9 trillion in an accelerated AI race (i.e. with China).

    How long without meaningful revenues before the hyperscalers etc rein in spending?

    Given its lengthening lead in specifications and chip performance, its software moat with CUDA, and its 80% share of global GPU sales, I’d guess that by end-2030 Nvidia is more likely (adjusted for any more splits) to be either over $1,000 per share (a $25 trillion market cap, so 6x from here) or under $100 per share (40% below the current cap) than somewhere between those numbers.

  • 96 Delta Hedge July 26, 2025, 10:29 am

    Some context from Morgan Stanley to the (4 year, 2025-2028) $2.9 trillion data centre capex figure (going from over $300 billion in 2025 – compared to $200 billion and $125 billion respectively in 2024 and 2023 – to $900 billion in 2028):

    “Internal operating cash flows from the hyperscalers have been the source of this spending. However, our equity analysts expect the investment needs for data centres to rise sharply over the next few years … Leveraging our equity analysts’ projections, we estimate that $1.4 trillion of hyperscaler capex may be self-funded with cash flows, leaving a sizable $1.5 trillion financing gap.”

    Can VCs/SoftBank etc fill that $1.5 trillion gap without appreciable amounts of ‘AI’ (so called) revenues appearing?

    If some form of hybrid approach achieves rapidly recursively improving ASI in the foreseeable future, then $1.5 trillion will look like spare change. A rounding error in the impact which such ASI would achieve every year.

    But just at the moment – with comparatively cr***y LLMs, which can’t properly be described even as narrow AI, let alone as a non-bottleneckable substitute for all human labour – a $1.5 trillion funding gap in the next few years looks to be a big problem.

    The Dot Com boom didn’t end because it turned out that the promise of the internet was a fraud (WorldCom aside).

    The predictions of a changed world, elevated profit levels, and of new global monopolies and new massive markets ultimately came true for some shareholders (you’ve done very well if you’ve just held Microsoft or Amazon since the mid 1990s, crashes included).

    The Dot Com market frenzy ended partly on a parabolic blow out, but mainly over escalating concerns over the burn rates of the leading ‘New Economy’ companies – such that it was feared the winners (in terms of sales CAGR) would run out of funding runway before they could become profitable.

    Might it be that in the next few years history – if not repeating – at least rhymes with 1999?

  • 97 Delta Hedge July 27, 2025, 6:19 pm

    Some useful links on hard limits to LLM scaling, each explored in a rigorously quantifiable, but still thoroughly accessible, way:

    Toby Ord (effective altruist):

    https://www.tobyord.com/writing/the-scaling-paradox

    Key scaling wall – unsurprisingly it’s LLM accuracy improvement: “accuracy scales with the amount of compute used as the 1/20th power.”

    This is totally hopeless.

    By my back-of-the-envelope calculation: take the most extreme example imaginable, and use for LLM training purposes the rest mass energy of all the stars in the whole observable universe every second – about 10exp70 watts (given ~10exp23 main sequence stars in the 10exp11 large galaxies accessible to our observation, each of ~10exp30 kg, at nearly 10exp17 J/kg using e=mc2). Then, for that 10exp60-fold increase in compute (i.e. 60 orders of magnitude!) over the 10 GW (10exp10 watts) used now for LLM training, we’d only get a 1,000-fold improvement in LLM accuracy (10exp3 being the 20th root of 10exp60).

    Without even considering the real-world training data constraints, chip fabrication limits etc, we can see immediately that LLM-only approaches must soon be at a dead end.

    Not opinion. Just maths.
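
    The same sum in Python, using only the figures above:

        compute_now = 1e10       # ~10 GW currently going into LLM training
        stars = 1e23             # main sequence stars in the observable universe
        star_mass = 1e30         # kg per star
        energy_per_kg = 9e16     # J/kg, from e = mc^2
        compute_extreme = stars * star_mass * energy_per_kg   # ~1e70 W, spent every second

        boost = compute_extreme / compute_now   # ~1e60-fold more compute
        print(f"{boost ** (1 / 20):.0e}")       # ~1e3: a mere thousand-fold accuracy gain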

    Then we need to factor in those various practical issues, as Epoch AI nicely review here:

    https://epoch.ai/blog/can-ai-scaling-continue-through-2030

    Whilst these are not actually hard and fast physical limits – unlike LLMs’ unavoidable diminishing returns to accuracy with scaling – they still represent a compelling series of economic constraints.

    Given all this, why aren’t the big players pivoting hard and fast from LLMs into symbolic language approaches to AI realisation?

    The obvious answer is ‘maybe they are’, but, if so, then:
    – What do they gain from not going public? Nothing which I can see.
    – How have they managed to keep it secret? Impossible in practice given both the level of interest and the numbers of researchers involved.

    This means that these firms are, in effect, in the process of collectively driving off a cliff.

    Whilst no individual lemming ever got bad publicity, the market is not going to forgive this one lightly when the reckoning eventually comes.

    I’ve updated my prior sharply downwards for the $2.9 trillion (2028) or the $5.1 trillion (2030) of data centre Capex actually turning up.

    If so, then Nvidia is toast.

    My deep, abiding and profound fear here is that I’m ‘missing it’ by ‘just not getting it’, and/or that I’m making a category mistake – i.e. like I did with BTC, which I could have bought in 2015 at low three figures, instead of six figures as now, just a decade on.

    For every “AI is snake oil” perspective (like this one here):

    https://www.aisnakeoil.com/archive

    one should also go read something from an AI booster.

    Truth is not a balancing act, but finding it does involve keeping an open mind.

    Just not so open though that your brain falls out 😉

  • 98 ermine July 27, 2025, 9:56 pm

    @TI > about half of them hate it

    I am that guy. Curiously Mrs Ermine salvaged this from the bag headed for the dump, so perhaps she can translate it for me 😉 I bought it as an honest paper book. It wasn’t worthless, but it somehow pulled the black tip of my tail. And I’m an analog(ue) guy at heart.

    AI? Feels like the runup to the dotcom bubble. Tomorrow I am going to buy AIAI. And ram a trailing 5% stoploss up its ass. I don’t believe in it, I think it is a crock, the more I see of AI the more I think my mustelid intelligence has the edge.

    But I have learned in investing sometimes you gotta bet on what you don’t believe, because dang it, sometimes you’re wrong 😉 What’s the worst that could happen on a 5% stoploss on a UCITS ETF? You’re 10% down if it gaps, you’re 100% down if it flames out. I’m no Michael Burry, I ain’t got the cojones to short it. I’m already considering changing down from Mogul to Maven, because some Mogul discussion is at a higher place unreachable to me – family trusts, WTAF – I am not worthy of such rarefied air 😉

    This is hopefully way far down in the noise that TI won’t ice me for mentioning the Dark Side too much…

  • 99 The Investor July 27, 2025, 10:19 pm

    This is hopefully way far down in the noise that TI won’t ice me for mentioning the Dark Side too much…

    @ermine — The force is strong with you, and you have your own blog. I’ve hopes of you making ten referrals and getting Moguls for free, as per Weekend Reading this weekend. 😉

    Re: Dyson’s book, my experience is the more literal-minded engineer/maths types haven’t enjoyed it; those of us who straddle (/bluff) have done better. So perhaps you’ll feel happy it flopped for you, given that.

    @Delta Hedge — Interesting snippets you’re finding here. I’m going to watch your movie later. Possibly from behind the sofa!

  • 100 ermine July 27, 2025, 11:19 pm

    > I’ve hopes of you making ten referrals and getting Moguls for free, as per Weekend Reading this weekend.

    I could do that, but it’s not the money, I don’t give a shit about the £80, to be honest it’s noise to me. I’m not going to bum off my readers. I felt alienated, the little guy from that and some other articles by the rich kids of London. I have enough, I have been able to add value to other lives, and I owe you deeply for the red Ariadne thread outta the GFC, and I will be maven as I still want to tip the ermine hat. But I am outclassed, and it made me feel poor, which is effing ridiculous as I’m not 😉

    Anyway, I don’t want to derail DH’s fine AI thread. He’s bonkers, but he’s magic too, more power to him, I love the purity of heart, I had that once. I am still of the opinion that AI is going to shit out bigtime. I’m probably wrong, I am old, possibly out of touch, though I see endless misery for the PMC, technofeudalism as Varoufakis said. As a challenge to DH, point me at one company that is making a profit in 2024/25 from AI. Not upping its ARR, good, honest profit. Excluding Nvidia, yes, they’re making the shovels. Where the hell is the gold in them thar hills?

  • 101 Delta Hedge July 28, 2025, 8:12 am

    Thank U for the kind words @ermine 🙂

    Tempted to respond to the challenge with a list including the likes of PLTR (which I’ve been HODLing, and which to date is my best individual stock performance ever), or perhaps something like Cadence Design (CDNS).

    All the big players involved in data centres, cloud or chips more generally have been making out, or are expected to make out, like bandits on their underlying fundamentals and on price performance.

    It doesn’t much matter whether you’re looking at Nvidia, AMD, ARM, TSMC, ASML, AMZN (with AWS), MSFT (Azure), Alphabet (Google Cloud), or at the related support businesses like CEG (data centre energy), Applied Materials, Broadcom, Vertiv (cooling) or Arista Networks (or, for that matter, at the supposedly ‘AI’-driven cyber security plays like CrowdStrike, Zscaler, SentinelOne and Palo Alto Networks).

    Heck, even dinosaur Oracle is having a big comeback off of cloud.

    It feels like the only way is up.

    A dangerous feeling to have. Sooner or later someone will have to pay the price for complacency. Maybe that’ll be me.

    Some, like Nvidia (falling from 50x to 40x forward PE, even as the price goes up), do look more or less ‘reasonably’ priced, whatever the h**l “reasonable” means in US large caps these days.

    Others though, not so much…

    As the Sherwood article in this W/e’s reading links shows, PLTR is the poster child for bonkerist valuations in this ‘cycle’.

    In fact, it’s quite possibly even worse than the figures in that piece.

    I make Palantir over 600x trailing and around 300x forward PE, with 180x forward free cash flow.

    Insane in the Brain, as Cypress Hill would have put it.

    Of course, Alex Karp would probably say that his/Thiel’s company will be at a 2, 3 or even 4 trillion dollar market cap by 2035-40, by then giving MSFT a run for its money.

    We’ll see I guess.

    Of course, the pure-play LLMs are a basket case’s basket case. Their financials are nonsense on stilts. Boris’s inverted pyramid of piffle. OpenAI’s cash flow statement must make a 2021 SPAC look respectable.

    Strange times we’re living through! 😉

  • 102 Delta Hedge July 31, 2025, 10:46 am

    “X risks” of ASI – what should we spend to mitigate them?

    Klement’s on the case today:

    https://open.substack.com/pub/klementoninvesting/p/preventing-the-terminator-scenario

  • 103 Delta Hedge August 2, 2025, 12:03 am

    Okay. More details in the FT yesterday on that Morgan Stanley report on data centre Capex.
    – capacity increasing by 6x by 2030
    – of the $1.5 trillion shortfall in the $2.9 trillion spend by 2028: $800 billion to be met by private credit asset based finance and debt funding joint ventures (for comparison the entire private credit market is $1.4 trillion now), $350 billion to come from Sovereign Wealth Funds, Private Equity and Venture Capital, $200 billion from corporate debt issuance and the last $150 billion from securitised credit/asset backed securities.
    – Of the $2.9 trillion total $1.3 trillion goes on the build out and $1.6 trillion on GPUs.
    – The total doesn’t include the cost of new power infrastructure.

    Leaving aside that this looks… ahem… just a tad ambitious (e.g. the $900 billion of data centre capex in 2028 alone is not only nearly the entire $950 billion total 2024 capex of the S&P 500, but also completely dwarfs the peak Technology, Media and Telecommunications sector spend of $135 billion in 1999/2000, at peak dot com):

    Then, *if* this actually ever comes to pass, we’re looking – by my back-of-the-napkin maths (which, frankly, is perhaps no more nor less grounded than I suspect Morgan Stanley’s guesses are here) – at something in the order of:

    a) $1.6 trillion on GPUs cumulative 2025-28 = ~$500 billion in 2028 alone.
    b) $500 billion x Nvidia’s ~80% share of bleeding-edge GPUs, on margins of 80% (well above its average for all chips) = ~$320 billion in profit for Nvidia from just high-end GPUs for LLMs in 2028.
    c) Add in gaming chips etc and maybe $350-$375 billion profit for Nvidia in 2028.
    d) On normal multiples that suggests something like a market cap of about $10 trillion. But on last year’s 50x forward PE, maybe closer to $20 trillion. That in turn gives nearly 5x-10x from here by the end of H2 2028.
    All of which is frankly pretty hard to believe, but that’s what the Morgan Stanley scenario implies to me, if it ever panned out within its timeline.

  • 104 Delta Hedge August 2, 2025, 12:18 am

    That should have read “nearly 2.5x to 5x from here by the end of H2 2028” (i.e. $400 to $800 per share, assuming no further splits, versus $170ish now). Apologies – timed out before I could correct.
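
    For anyone who wants to replicate the napkin with the corrected multiples, here’s a quick Python sketch (the share count and current price are my own rough assumptions, not Morgan Stanley figures):

        shares = 24.4e9                # approx Nvidia shares outstanding (my assumption)
        price_now = 170.0              # USD per share, as above
        cap_now = shares * price_now   # ~$4.1 trillion

        profit_2028 = 360e9            # midpoint of the $350-375 bn scenario profit
        for pe in (28, 50):            # a 'normal' multiple vs last year's ~50x forward PE
            cap = profit_2028 * pe
            print(f"{pe}x: ${cap / 1e12:.0f} tn cap, {cap / cap_now:.1f}x, ~${price_now * cap / cap_now:.0f}/share")

    Which is roughly where the ‘nearly 2.5x to 5x’ and ‘$400 to $800 per share’ figures come from.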

  • 105 Delta Hedge August 2, 2025, 7:24 pm

    I’ve now got a better handle on what a decrease in test loss score actually means, per the reference in Toby Ord’s January 2025 piece to the “key graphs” from OpenAI’s 2020 paper “Scaling Laws for Neural Language Models” (copy of the 2020 paper here):

    https://arxiv.org/pdf/2001.08361

    So test loss is cross-entropy loss, measured in nats (natural logarithms, using Euler’s number “e”).

    One converts cross-entropy loss to perplexity, which is defined as:

    perplexity = e^(loss in nats)

    A test loss score of 6 means e raised to the 6th power (which is 403) wrong guesses for every right one, i.e. the model is about as clueless as someone picking randomly from 403 options.

    And a test loss score of 3 means e raised to the 3rd power (which is 20) wrong guesses for every right one.

    So, whilst it takes a million-fold increase in computational resources to go from a test loss score of 6 to one of 3, that’s a lot more than just a 2-fold increase in accuracy (i.e. it’s not simply the factor of 6 divided by 3).

    Still, it is only a 20 fold improvement (i.e. from 403 wrong guesses down to 20 wrong guesses per right one).

    A test loss score of 0.01 would mean about only 1 percent of first guesses would be wrong.

    That would require (3/0.01) raised to the 20th power (i.e. 300exp20) times more compute than a test loss score of 3 – which works out at about 3.5x10exp49 times as much.

    Although this is (much) less than the back-of-the-envelope figure in my example at #97 above, it is still utterly hopeless, not just hard.

    Indeed, it’s impossible in the physical universe as we know it (Lloyd’s Limit, Landauer’s Bound, etc.)

    So, the point remains, we’re never getting to a robust and reliable AI that can substitute for all human cognitive labour (say a test loss score of 0.01, plus fully generalisable) via LLMs on their own.
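
    Showing the working in a few lines of Python (the 1/20th-power relation is the one quoted from Ord at #97; the rest is just exponentials):

        import math

        # Perplexity = e^loss: the effective number of equally likely next-token choices
        for loss in (6.0, 3.0, 0.01):
            print(loss, round(math.exp(loss), 2))   # 403.43, 20.09, 1.01

        # Compute needed scales as (loss ratio)^20 under the 1/20th-power law
        print(f"{(6 / 3) ** 20:.1e}")      # ~1.0e6: the million-fold step from loss 6 to 3
        print(f"{(3 / 0.01) ** 20:.1e}")   # ~3.5e49: the step from loss 3 to 0.01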

  • 106 Delta Hedge August 4, 2025, 12:20 pm

    One of the frustrating aspects I’m finding about the discourse between the curious (and somewhat informed) and the field insiders, in discussions of machine learning and AI, is the absence of any distinction between issues which are truly foundational to the field and those which are merely coarse-grained emergent properties (or possible future aspects).

    It’s not quite the distinction between the foundational and the phenomenological in epistemics (especially philosophy of physics), but there’s a resonance of that distinction here.

    The failure, so it seems to me, is in not addressing the foundational issues. And ultimately these issues, whilst technical, are foundational because they will surely determine whether or not – and, if so, then when, how and to what extent – AGI and/or ASI (as commonly understood and defined) will actually ‘work’.

    So, just in the last couple of days there have been multiple podcasts discussing the effects of AI on work and the economy – for example Calum Chace’s interview with the Future of Life Institute.

    But these just assume the technical issues away. They don’t explain them, less still explore them.

    Yes, as Mr Chace notes (referencing physicist Max Tegmark’s 2017 work on AI in, IIRC, “Life 3.0”), in 1900 there were 22 million horses in the US and now there are fewer than 2 million. But, with respect, in 1900 it was known with certainty that the internal combustion engine worked (for both well understood theoretical reasons and from empirical observation).

    We just don’t know that (yet) for AGI.

    Now the likes of Dwarkesh Patel, who’s usually fairly boosterish on ML and AI, are getting closer to actually discussing the deeper, more relevantly useful issues underlying the implementation of both what we’ve got now and what’s in active development. E.g. see here:

    https://youtu.be/nyvmYnz6EAg?feature=shared

    But what I really want to see discussed is why scaling accuracy (i.e. cross-entropy loss reduction, expressed as the exponent of the base of the natural logarithm) is – at least so far – only improving at the one-twentieth power of the computation used in model training.

    This is what those AI labs, like OpenAI, which are publicly committed to LLM-only approaches really need to address now.

    Otherwise the rest of what they’re saying seems to me to be not much more than a plausible-sounding, but ultimately foundationless, admixture of marketing puffery, flimflam and hand waving.

    I also want to see how other approaches, including programmable formal symbolic logic, can be deployed at scale and at pace and economically; and then integrated with the (limited, but with some successes) neural net paradigm (which is the base architecture for LLMs).

    Strangely, even though Alphabet is widely criticised amongst growth investors (e.g. the two Toms – Lee (of Fundstrat) and Nash (YouTube)) for not having a fleshed-out AI strategy (although that’s a relative judgment, given Apple’s inactivity in this area), it may be that Google DeepMind shows it has greater institutional capacity and willingness than its competitors to pivot from LLMs to what may be more fruitful approaches to achieving AGI, including hybrid neuro-symbolic ones.

    Think about it for a moment: the others, especially OpenAI, are culturally wedded to the transformer paradigm and have huge sunk costs (pecuniary and personal) in that approach as *the* way forward.

    Not only can an oil tanker not do a tight turning circle, but ‘skipper’ Sam (Altman) is not going to (or will be slow to) even begin spinning the steering wheel.

    In contrast, look at the prospects for DeepMind here:
    – Strongest academic depth (AlphaFold, AlphaZero, Gato, Gemini)
    – History of non-scaling breakthroughs (Monte Carlo Tree Search, Differentiable Neural Computers, MuZero)
    – Gemini 1.5 has efficient context scaling, maybe sparse models.

    And Alphabet is relatively cheap because no-one is seeing through the apparent lack of an LLM strategy to deal with the threat to core Google search.

    Oh. And of course with Alphabet you get as a bonus access to Waymo (and its Uber tie in) as the robotaxi hedge against Tesla.

    It’d perhaps be a tad ironic if history repeats and Google ends up with the killer AI approach, just like they did when they came in out of the blue with backlinks for search – immediately successful, and obviously much better than what Yahoo and Lycos offered (even though, pre-1998, it was the latter two, and not Google, that were hyped as the gatekeepers to the internet).

  • 107 The Investor August 5, 2025, 2:59 pm

    Good one to add to your reading here Delta Hedge:

    No one could have avoided hearing the loud, repeated, increasingly shrill insistence that we stand at the precipice of a great revolution in every aspect of human life, thanks to “artificial intelligence,” which here refers to large language models that return strings which are deemed algorithmically likely to satisfy written prompts. Alexander, for his part, insists that AI is “not ordinary technology.” The AI era we’re supposedly living in has generated the most profound hype cycle in American media, arguably, since the post-9/11 terrorism freakout. And yet there’s a bizarre refusal to accept that the maximalist position has won the news cycle.

    https://freddiedeboer.substack.com/p/the-rage-of-the-ai-guy

  • 108 Delta Hedge August 5, 2025, 9:24 pm

    It’s a fantastic piece @TI. The header image is the one which I have in my mind’s eye of @ermine railing at the computer 😉

    Where I think that the piece misses the mark though is that the issue is not one of ‘AI’ at all, for “AI” does not exist, and may never exist.

    The issue is with the inherently, and unavoidably, unreliable statistical stochastic parrot called an LLM, which definitely is not, and won’t of its own accord ever become, “AI”.

    AI is not, and will never be, an accurate or appropriate description of LLMs.

    The term AI is being used to try to convince VCs, SWFs, SoftBank etc that ‘real AI’ has already been achieved, and just needs some wodge to get it polished up and deployed into the economy.

    This is not the case.

    AI may be possible, but not through only using LLMs.

    That is not an opinion.

    It is a mathematical fact.

    With an ordinary 32-bit computer we already get an output error rate of just one in every 4.3 billion operations (i.e. one in every 2^32).

    But with LLMs still at test-loss scores of around 3, only about one in 20 next-token predictions during training is on target (a cross-entropy of ~3 nats implies an average per-token probability of e^-3, roughly 5%).
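
    For anyone who wants to check that ‘one in 20’, here’s a back-of-envelope sketch in Python. It assumes the reported loss is a mean per-token cross-entropy in nats – units and tokenisers vary between labs, so this is illustrative only:

        import math

        # Assumption: 'test loss of ~3' = mean per-token cross-entropy in nats.
        loss_nats = 3.0
        p_token = math.exp(-loss_nats)   # ~0.0498, i.e. roughly 1 in 20
        print(f"Average probability assigned to the right token: {p_token:.3f}")

        # Errors compound: even at a far better 99% per-token accuracy, a
        # 500-token answer is fully clean only ~0.7% of the time (crudely
        # assuming independence between tokens).
        print(f"500 tokens at 99% each: {0.99 ** 500:.3%} fully clean")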

    Outputs then have to be corrected manually, defeating the whole point of automation.

    We could put a Dyson swarm around the Sun and use every watt for compute, and LLMs would still improve only modestly in accuracy, reliability and novelty of response.
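
    To see why, a minimal sketch of the scaling-wall arithmetic, assuming a Chinchilla-style power law for loss versus compute with made-up constants (the real fitted values differ, but the shape of the curve is the point):

        # Illustrative only: loss(C) = L_inf + a * C**(-b), with assumed
        # constants. Power laws mean each extra 1,000x of compute shaves
        # less off the loss than the last.
        L_inf, a, b = 1.7, 3.0, 0.05

        def loss(compute_multiple):
            return L_inf + a * compute_multiple ** (-b)

        for factor in (1, 1e3, 1e6, 1e12):   # 1e12 ~ Dyson-swarm territory
            print(f"compute x {factor:.0e}: loss = {loss(factor):.2f}")
        # prints losses of roughly 4.70, 3.82, 3.20 and 2.45 respectively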

    As a digression, it is more than a bit like (but much worse than) Elon calling his ‘Starship’ a starship when it is a low-Earth-orbit launch system capable of reaching the lofty height of just 400 km (which the Soviets first achieved back in 1957). In comparison, the nearest star outside our own lies at a distance of 40 trillion km, 100 billion times the distance his so-called ‘Starship’ reaches from the Earth’s surface. FAPP, compared to a true starship, it is closer to the Wright brothers’ Kitty Hawk, or even a paper aeroplane.

    Anything that is called AI is not going to successfully substitute for human labour unless and until it can do so consistently and reliably.

    More to the point, unless the approach to attaining AI can produce something meaningfully and usefully beyond the frontiers of existing human knowledge, then it is not going to either expand or accelerate the growth prospects for the economy.

    Innovation requires new discovery, not regurgitation of a partly hallucinatory version of what we already know.

    LLMs cannot and will never produce genuinely new breakthroughs, by definition, because their architecture simply does not and cannot permit it.

    They don’t reason. They associate text.

    Given access only to 19th century physics, they are never going to come up with either General Relativity or Quantum Mechanics on their own.

    So, they’re not going to ‘solve physics’ or anything else which has eluded solution to date.

    This is not nit picking, but fundamental to understanding the economic payoff (or lack thereof).

    You wouldn’t have a pilot who can safely land the plane on only one in 20 attempts (or even, for that matter, on 19 out of 20). Nor would you employ a pilot who can’t reason in a predictable way from first principles, and who is basically a zombie, imitating some of the surface features of an intelligent entity, but lacking a capacity for thought in a meaningful sense.

    The misuse of the AI label is a much more severe and generalised version of the presentation problem seen when Elon hypes what is basically Level 3 autonomy as Level 5 FSD, applied here to cognitive labour.

    Once the scaling wall is understood, it becomes clear that the winner in the AI race – if the race can be won, or ever has an end – will be the company or country which first abandons LLMs for a different approach (or set of approaches), being alternatives which at least have a chance.

    I think Google looks least committed to the LLM hoax, and the most likely company to steal a lead on its rivals by doing something different, as it has with AlphaFold.

    As the brain is basically just a load of particles following predictable natural laws, it would seem – in principle – that there is a way to achieve genuine cognition outside of organisms evolved through Darwinian selection. So this is not like speculating about something inherently impossible, like FTL travel.

    But that way, if it exists, sure as hell is not going to be through using only LLMs. At the moment we seem to be in the insanity phase of trying the same thing over and over again with LLMs in the hope the outcome will be different.

  • 109 Delta Hedge August 10, 2025, 9:57 am

    Dyson book – Analogue computers:

    A very nice Veritasium summary here of analogue computers, from just before (March 2022) the start of the current deep neural net / LLM ‘explosion’:

    https://youtu.be/GVsUOuSjvcg?feature=shared

    Surely AGI if/when it arrives will be:
    1. Part digital and part analogue
    2. Part classical (for general-purpose computing) and part quantum (for very specific tasks, where quantum offers exponential speed-ups)
    3. Part formal logic, symbolic reasoning and programming; and part neural nets, weight adjustment, back propagation and inference.
    And:
    4. Where digital, then using a dynamically optimised mix of:
    a. general-purpose chips: CPUs, GPPs and MPUs;
    b. graphics & parallel processing: GPUs;
    c. AI & machine learning chips: IPUs, NPUs, TPUs and LPUs;
    d. embedded systems: MCUs, SoCs and ECUs;
    e. signal processing: DSPs; and,
    f. custom hardware: FPGAs and ASICs

  • 110 Delta Hedge August 13, 2025, 12:40 pm

    Gary again

    https://open.substack.com/pub/garymarcus/p/llms-are-not-like-you-and-meand-never

    But where are the solutions? Anyone can criticise. We need solutions, not sniping from the sidelines.

    All the main model providers try, to some extent, to improve on ‘pure LLM’ output by integrating other approaches and/or building in various sorts of ‘checks and balances’, for example:
    – encouraging smarter prompts and few-shot examples;
    – retrieval-augmented generation to ground answers in verified data;
    – automated output checking with evaluation tools;
    – using a secondary “LLM-as-Judge” to review outputs (sketched below);
    – periodic human-in-the-loop review;
    – supplementing model answers with structured knowledge graphs;
    – formal logic verification using theorem provers or solvers; and,
    – running multiple models in parallel for consensus.
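
    For the curious, a minimal sketch of the ‘LLM-as-Judge’ pattern from that list. Here call_model is a hypothetical stand-in for any provider’s chat API – no real library call is being named:

        def call_model(model: str, prompt: str) -> str:
            # Hypothetical: wire up your chosen provider's API here.
            raise NotImplementedError

        def answer_with_judge(question: str) -> str:
            draft = call_model("worker-model", question)
            verdict = call_model(
                "judge-model",
                f"Question: {question}\nAnswer: {draft}\n"
                "Reply PASS if the answer is correct and grounded, "
                "else FAIL with a one-line reason.",
            )
            if verdict.startswith("PASS"):
                return draft
            # One retry with the objection fed back in; real deployments
            # loop, escalate to a human, or fall back to retrieval instead.
            return call_model("worker-model",
                              f"{question}\n(A reviewer objected: {verdict})")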

    However, implementation of formal logic and mathematical programs to fix the faults in LLM accuracy still seems sparse (mainly in research or niche tools) and, whilst providers are exploring this for reasoning, they’re not deploying it broadly yet, seemingly due to the overhead.

    OpenAI’s o1/o3 models use internal reasoning chains, but have no full theorem-prover integration like Lean/Z3. Google is most advanced here, with AlphaProof / AlphaGeometry integrating LLMs with formal systems (e.g. Lean) for maths proofs, and Gemini uses logic critics in some modes. But Anthropic Claude’s reasoning lacks formal integration, and only their misalignment research touches on logic checks. Meta’s Llama experiments with logic engines in papers, but it’s not yet a core feature, whilst xAI’s Grok seemingly is doing comparatively jack **** here, at least AFAICT.
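
    To make ‘formal logic verification’ concrete, a toy example using the real Z3 solver’s Python bindings (pip install z3-solver), checking a hypothetical model claim that an integer exists strictly between 2 and 3:

        from z3 import Int, Solver, sat

        # The claim to verify: 'there is an integer x with 2 < x < 3'.
        x = Int("x")
        s = Solver()
        s.add(x > 2, x < 3)

        # unsat means the claim is mathematically refuted, not just
        # improbable - the kind of hard guarantee an LLM alone cannot give.
        print("Claim verified?", s.check() == sat)   # Claim verified? False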

    So there’s a long way to go.

    I think it’s a case of integrate formal logic into outputs or die out there.

    Models are commodities, but no one buys dodgy petrol or steel.

    The question now is, can they integrate faster than their curve of end user disillusionment rises and also before their Capex runway ends?

    Again, as far as I can tell, the dark horse of the race, Google/Alphabet, is in the lead to succeed.

  • 111 The Investor August 13, 2025, 1:58 pm

    @DH — Well firstly, I don’t think anybody can criticise, or at least most people don’t. (See also Brexit!) Also, to be fair to Marcus, he’s put forward his hybrid solution – proper (non-LLM) reasoning plus LLM-type models – many times in the past.

    Secondly, I’d argue the $155bn US big tech has already spent on AI capex is the ‘solution’. Set against that, a rebuttal on a blog doesn’t seem like piling on, exactly… 😉

    Interesting piece, cheers for adding to the tally.

  • 112 Delta Hedge August 13, 2025, 2:33 pm

    Yeah, but he needs to spell out exactly how the alternative hybrid approach would actually work in practice.

    A real road map, with full particulars.

    Of course he’s right. But it’s not enough to just be right here IMO.

    And, as you say, it’s big stakes time.

    I’m seeing $102.5 bn data centre spend just in the last quarter, and that’s without including Oracle, who’ve just stepped into contention:

    https://www.thedigitalspeaker.com/silicon-valley-just-spent-102-5-billion-on-concrete-and-steel-the-age-of-code-is-dead/

    $102,500,000,000 from just 4 companies in 13 weeks (FFS!)

    With Oracle included, the AI capex run rate already looks like around half a trillion p.a.: enough, perhaps, to keep the US in growth in spite of the tariff hit, and well on the way to meeting that projected spend of $900bn in 2028 (which might even turn out to be an underestimate, given how fast spending is increasing now).

  • 113 The Investor August 13, 2025, 4:07 pm

    @DH — Well if he could spell it out exactly I imagine he’d be doing it and we wouldn’t be having this discussion.

    Also, Marcus is not the one spending $155bn already in 2025. I don’t think the onus is on his side of the argument to prove anything… 😉

  • 114 Delta Hedge August 13, 2025, 5:20 pm

    Of course you’re right that the burden is on the hyperscalers spending the proverbial wealth of Croesus on LLM-only (or predominantly LLM) approaches (plus on Stuart Russell and Geoffrey Hinton, who’ve inspired them), and not on Gary Marcus.

    No dispute there.

    As you say, how can Gary (much like us Remainers with Brexit) ever prove a counterfactual?

    He can only point to the serious LLM shortcomings on accuracy, reliability and robustness (just like we can only point to the Leave disaster, which is the factual).

    Given what’s at stake here – which is basically now one of:
    – human flourishing under effective and benign/aligned ASI
    – human loss of control/extinction risk under misaligned ASI
    – no ASI or even AGI, at least for the foreseeable future, and a recession inducing stock market crash and AI winter
    I just feel that the whole ML/AI field now needs to come together, pool ideas and resources, and engage in constructive collaboration.

    No one’s done AI before. There is no map of the landscape. Therefore no one has a monopoly on wisdom or insight, on either side of the debate within the field.

    Back at #92 above I linked to Hinton’s 23/07 talk to the Royal Institution. He’s a smart guy for sure.

    But his disparagement in that talk of formal logic and programming approaches – and also, previously, of Gary Marcus on his (Hinton’s) website (see links at the foot of the site):
    https://www.cs.toronto.edu/~hinton/
    – is frankly a bit dispiriting, as it illustrates that there’s closed-mindedness and/or unconstructive (and for the moment mostly unsupported) conviction and certainty on all sides in this novel, previously untested, but potentially hugely impactful field.

    I’m of the view that the whole ML/AI field should now be put under government control and direction – like the Manhattan Project or Apollo programme.

    Not nationalisation without compensation, but rather state-directed with reward sharing (pro rata to contributions) as between investors and the state. For the good of all.

  • 115 Delta Hedge August 13, 2025, 9:24 pm

    The largest AI firms’ capex has increased to 1.3x EBITDA on a trailing 12-month basis, compared to a 0.5x average for all capex across the rest of the 100 largest companies in the S&P (Bloomberg). If the Mag 7 are no longer going to be capex-light, with FCF returned to shareholders in buybacks or held as cash, then valuations rest only on future growth.

    Manchester and London IT (14% discount to NAV) is currently holding >90% in the biggest US AI plays (39% in Nvidia and 23% in Microsoft, for OpenAI and Azure cloud). That’s a high-conviction allocation if ever I saw one. Puts even the likes of PCT, SMT and ATT in the shade!

  • 116 Delta Hedge August 18, 2025, 9:53 am

    “Recall that annual growth rates in world GDP were less than a hundredth of a percent in the stone age, a fraction of a percent in the agricultural age, and single-digit percentage points in the industrial age. If this pattern continues, a fourth age would eventually produce sustained double-digit growth, meaning a world economy doubling time measured in years” – Jason Crawford, part 8 of the Techno-Humanist Manifesto.
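
    The doubling-time arithmetic behind that quote, for anyone who wants to check it (doubling time = ln 2 / ln(1 + g) for annual growth rate g; the rates below are illustrative stand-ins for each era):

        import math

        for label, g in (("stone age", 0.0001), ("agricultural", 0.005),
                         ("industrial", 0.03), ("fourth age?", 0.10)):
            years = math.log(2) / math.log(1 + g)
            print(f"{label:>13}: {g:>6.2%} growth doubles GDP every {years:,.0f} years")
        # roughly 6,931 / 139 / 23 / 7 years respectively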

    “AI’s future hinges on generating real revenue. While some companies will succeed, others relying on VC subsidies will prove unsustainable. Many coding assistants currently lose money per prompt. As an example, Big Tech spent over $100B in Q2 on data center assets that have a 4-6 year useful life; OpenAI had an estimated $5B of annual revenue in 2024.” (@nickbaumann_, lightly edited)

    “Railroads, once an innovative technology, comprised 63% of the US stock market in 1881 before collapsing.” (BofA)

    Everyone’s hyped on AI as if it’s the only thing driving growth, but Michael Green drops a striking stat:

    AI-related Capex added more to U.S. GDP growth last quarter than consumer spending.

    That’s huge… but also a sign of fragility.

    Strip away AI and the picture is bleak: private fixed investment (incl. housing) is flat since 2018 and barely above 2007 levels, even though the U.S. population is up 25%.

    The problem is as old as markets: canals in the 1820s, railroads in the 1850s, autos/electrification in the 1900s, fiber optics in the 1990s. First movers pour in, only to get leapfrogged by faster, cheaper tech.

    Think Nvidia: Company A buys H100s at $30K in 2024, locked into $6K/year depreciation. Company B waits until 2026, buys H200s at $15K with 2× the performance, and only $3K/year depreciation. Same product (compute), but B’s marginal cost sets the price, leaving A underwater.
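
    The arithmetic in that example, assuming straight-line depreciation over five years (the middle of the 4-6 year useful life cited above) – the prices and the 2x performance figure are the hypotheticals given, not market data:

        # Company A buys H100s in 2024; Company B buys H200s in 2026.
        fleets = {"A (H100, 2024)": (30_000, 1.0),
                  "B (H200, 2026)": (15_000, 2.0)}   # (price $, relative perf)

        for name, (price, perf) in fleets.items():
            annual_dep = price / 5                    # straight line, 5 years
            print(f"{name}: ${annual_dep:,.0f}/yr depreciation, "
                  f"${annual_dep / perf:,.0f} per unit of compute per year")
        # A carries $6,000 per unit-compute-year against B's $1,500: a 4x
        # gap, so B's marginal cost sets the price and A is underwater.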

    The telecom bubble shows just how brutal this can all get.

    In the late 90s, firms assumed bandwidth would stay at “hundreds of dollars per Mbps.”

    By 2002, prices were down 95%.

    By today, wholesale bandwidth is far below $1/Mbps — a 99.9% collapse from peak.

    Telecom revenues tell the same story: from $235B in 2000 to just $118B by 2022.

    The internet was a success story; investors in the overbuild were not.

    AI may transform everything. But the lesson of history is clear: technology’s utility doesn’t guarantee investor returns. Overcapacity, rapid obsolescence, and collapsing unit costs can make the “next big thing” look like a mirage for anyone betting too early.

  • 117 Delta Hedge August 19, 2025, 5:41 am

    MS just estimated that LLMs could deliver $920bn in annual net benefits to S&P 500 firms, with ~$3tn in LLM-related capital expenditure needed to realise these gains. Over time, this investment could generate $13-16tn in new market value – a 24-29% boost to the index’s capitalisation. Sectors such as staples, retail, real estate, healthcare and transportation may see savings that exceed anticipated 2026 pretax profits. The bank estimates that about 90% of jobs will be affected to one degree or another by automation or augmentation – mainly enhancing rather than replacing roles – with less happening on the ’embodied AI’ (robot) front at this stage.

  • 118 Ducknald Don August 19, 2025, 9:24 am

    I would take what Microsoft has to say on this with a pinch of salt. They have a clear incentive to hype it up.
