
Weekend reading: First they came for the call centres


What caught my eye this week.

Bad news! Not only are the machines now coming for our cushy brain-based desk jobs, but our best response will be to hug it out.

At least that’s one takeaway from a report in the Financial Times this week on what kinds of jobs have done well as workplaces have become ever more touchy-feely – and thus which will best survive any Artificial Intelligence takeover.

The FT article (no paywall) cites research showing that over the past 20 years:

…machines and global trade replaced rote tasks that could be coded and scripted, like punching holes in sheets of metal, routing telephone calls or transcribing doctor’s notes.

Work that was left catered to a narrow group of people with expertise and advanced training, such as doctors, software engineers or college professors, and armies of people who could do hands-on service work with little training, like manicurists, coffee baristas or bartenders.

This trend will continue as AI begins to climb the food chain. But the final outcome – as explored by the FT – remains an open question.

Will AI make mediocre workers more competent?

Or will it simply make competent workers jobless?

Enter The Matrix

I’ve been including AI links in Weekend Reading for a couple of years now. Rarely to any comment from readers!

Yet I continue to feature them because – like the environmental issues – I think AI is sure to be pivotal in how our future prosperity plays out. For good or ill, and potentially overwhelming our personal financial plans.

The rapid advance of AI since 2016 had been a little side-interest for me, one I discussed elsewhere on the Web and with nerdy friends in real life.

I’d been an optimist, albeit I used to tease my chums that it’d soon do them out of a coding job (whilst simultaneously being far too optimistic about the imminent arrival of self-driving cars).

But the arrival of ChatGPT was a step-change. AI risks now looked existential. Both at the highest level – the Terminator scenario – and at the more prosaic end, where it might just do us all out of gainful employment.

True, as the AI researchers have basically told us (see The Atlantic link below), there’s not much we can do about it anyway.

The Large Language Models driving today’s advances in AI may cap out soon due to energy constraints, or they may be the seeds of a super-intelligence. But nobody can stop progress.

What we must all appreciate though is that something is happening.

It’s not hype. Or at least the spending certainly isn’t.

Ex Machina

Anyone who was around in the 1990s will remember how business suddenly got religion at the end of that decade about the Internet.

This is now happening with AI:

Source: TKer

And it’s not only talk – there’s massive spending behind it:

Source: TKer

I’ve been playing with a theory that one reason the so-called ‘hyper-scalers’ – basically the FAANGs that don’t make cars, so Amazon, Google, Facebook et al – and other US tech giants are so profitable despite their size, continued growth, and 2022-2023 layoffs is that they have been first to deploy AI in force.

If that’s true it could be an ominous sign for workers – but positive for productivity and profit margins.

Recent results from Facebook (aka Meta) put a hole in this thesis, however. The spending and investment are there. But management couldn’t point to much in the way of a return. Except perhaps the renewed lethality of its ad-targeting algorithms, despite Apple and Google having crimped the use of cookies.

Blade stunner

For now the one company we can be sure is making unbelievable profits from AI is the chipmaker Nvidia:

Source: Axios

Which raises the further question of whether, far from being overvalued, the US tech giants are still must-owns as AI rolls out across the corporate world.

If so, the silver lining to their dominance in the indices is that most passive investors have a chunky exposure to them anyway. Global tracker ETFs are now about two-thirds in US stocks. And the US indices are heavily tech-orientated.

But should active investors try to up that allocation still further?

In thinking about this, it’s hard not to return to where I started: the Dotcom boom. Which of course ended in a bust.

John Rekenthaler of Morningstar had a similar thought. And so he went back to see what happened to a Dotcom enthusiast who went all-in on that tech boom in 1999.

Not surprisingly given the tech market meltdown that began scarcely 12 months later, the long-term results are not pretty. Bad, in fact, if you didn’t happen to buy and hold Amazon, as it was one of the few Dotcoms that ultimately delivered the goods.

Without Amazon you lagged the market, though you did beat inflation.

And yet the Internet has ended up all around us. It really did change our world.

Thematic investing is hard!

I wouldn’t want to be without exposure to tech stocks, given how everything is up in the air. Better I own the robots than someone else if they’re really coming for my job.

But beware being too human in your over-enthusiasm when it comes to your portfolio.

The game has barely begun and we don’t yet know who will win or lose. The Dotcom crash taught us that, at least.

Have a great weekend!

From Monevator

Does gold improve portfolio returns? – Monevator [Members]

How a mortgage hedges against inflation – Monevator

From the archive-ator: How gold is taxed – Monevator

News

Note: Some links are Google search results – in PC/desktop view click through to read the article. Try privacy/incognito mode to avoid cookies. Consider subscribing to sites you visit a lot.

UK inflation rate falls to lowest level in almost three years – BBC

Energy price cap will drop by 7% from July [to £1,568] – Ofgem

House prices are modestly rising, driven by 17% annual spike in new build values – T.I.M.

Hargreaves Lansdown rejects £4.7bn takeover approach – This Is Money

Judge: Craig Wright forged documents on ‘grand scale’ to support Bitcoin lie – Ars Technica

FCA boss threatens private equity with regulator clampdown – CityAM

Sunak says it’s 4th July, in the rain, against a subversive soundtrack [Iconic] – YouTube

Sir Jim Ratcliffe scolds Tories over handling of economy and immigration after Brexit – Sky

No, it’s not all the Tories’ fault… but Sunak and Hunt were too little, too late – Bloomberg

Products and services

Pay attention to catches as well as carrots when switching bank accounts – Guardian

Which energy firm offers the cheapest way to get a heat pump? – T.I.M.

How to get the most from second-hand charity shops – Which

Get £200 cashback with an Interactive Investor SIPP. New customers only. Minimum £15,000 account size. Terms apply – Interactive Investor

Nine out of ten savings accounts now beat inflation – This Is Money

Problems when transferring a cash ISA – Be Clever With Your Cash

Nationwide launches a trio of member deals worth up to £300 – Which

Transfer your ISA to InvestEngine by 31 May and you could get up to £2,500 as a cashback bonus (T&Cs apply. Capital at risk) – InvestEngine

Seven sneaky clauses in estate agent contracts that can cost you dear – This Is Money

Halifax Reward multiple account hack: worth up to £360 a year – Be Clever With Your Cash

Hidden homes in England and Wales for sale, in pictures – Guardian

Comment and opinion

No, the stock market is not rigged against the little guy – A.W.O.C.S.

The life hedge… – We’re Gonna Get Those Bastards

…is easier said than implemented [US, nerdy] – Random Roger

Checking out a fake Ray Dalio Instagram investing scam – Sherwood

An open letter to Vanguard’s new CEO – Echo Beach

If you look past the headlines, London is charging ahead – CityAM

Most of us have too much in bonds [Search result] – FT

Why we still believe in gold – Unherd

Are ‘fallen angel’ high-yield bonds the last free lunch in investing? – Morningstar

For love or money – Humble Dollar

Naughty corner: Active antics

Fund manager warns putting £20k in the US now will [possibly!] lose you almost £8k – Trustnet

A deep dive into US inflation, interest rates, and the US economy – Calafia Beach Pundit

A tool for testing investor confidence – Behavioural Investment

When to use covered call options – Fortunes & Frictions

Valuing Close Brothers after the dividend suspension – UK Dividend Stocks

Meme stock mania has entered its postmodern phase [I’m editorialising!] – Sherwood

Kindle book bargains

Bust?: Saving the Economy, Democracy, and Our Sanity by Robert Peston – £0.99 on Kindle

Number Go Up by Zeke Faux – £0.99 on Kindle

How to Own the World by Andrew Craig – £0.99 on Kindle

The Great Post Office Scandal by Nick Wallis – £0.99 on Kindle

Environmental factors

Taking the temperature of your green portfolio [Search result] – FT

The Himalayan village forced to relocate – BBC

‘Never-ending’ UK rain made 10 times more likely by climate crisis, study says – Guardian

So long triploids, hello creamy oysters – Hakai

Robot overlord roundup

We’ll need a universal basic income: AI ‘godfather’ – BBC

Google’s AI search results are already getting ads – The Verge

AI engineer pay hits $300,000 in the US – Sherwood

With the ScarJo rift, OpenAI just gave the entire game away – The Atlantic [h/t Abnormal Returns]

Perspective mini-special

How much is a memory worth? – Mike Troxell

We are all surrounded by immense wealth – Raptitude

How to blow up your portfolio in six minutes – A Teachable Moment

My death odyssey – Humble Dollar

Off our beat

The ultimate life coach – Mr Money Mustache

How to cultivate taste in the age of algorithms – Behavioural Scientist

Trump scams the people who trust him – Slow Boring

Buying London is grotesque TV, but it reflects the capital’s property market – Guardian

The algorithmic radicalisation of Taylor Swift – The Atlantic via MSN

And finally…

“Three simple rules – pay less, diversify more and be contrarian – will serve almost everyone well.”
– John Kay, The Long and the Short of It

Like these links? Subscribe to get them every Friday. Note this article includes affiliate links, such as from Amazon and Interactive Investor.

Comments
  • 1 Marco May 25, 2024, 11:52 am

    Yay, I actually guessed that the “bearish on US equity” fund manager was Hussman before reading the article. The guy has made a living out of being an uber perma bear for the last 15 years. He’s probably due a right call?

  • 2 Paul_a38 May 25, 2024, 12:30 pm

    Thanks for the article, thought you might have the holiday weekend off.
    OK, ESG has exhausted its sellside utility it seems, now replaced by AI. Can’t get excited – it’s still only software and I reckon it will choke on its own errors as they cascade. The big problem will be legal: how to have a known, assured trail untainted by AI. So AI will be a source of data pollution, which may cause a few steps backward until The Purge.
    As for the Internet, a bunch of wires and dumb computers until along came the search engine doing what Morse did for the telegraph.

  • 3 dearieme May 25, 2024, 12:42 pm

    Decades ago I worked around the corner from a university department of Machine Intelligence, as AI was then called. The prima donnas who ran the department promised imminent society-changing revolution.

    I had a beer with one of their bright young men. He told me there were two deep problems with it all. (i) They didn’t know how to make computers emulate human decision-making. (ii) They didn’t know if it would be sensible even to try to emulate human decision-making.

    Much more computing power is available nowadays but are there yet answers to his two points?

  • 4 xxd09 May 25, 2024, 12:46 pm

    In the distant past, during my long learning evolution as an investor, J. Hussman appeared on my investing information radar and I read his posts for some time. They were a rather wonderful antidote to the eternal optimists who far outnumbered him at that time.
    I could never understand how such a depressing investing outlook could appeal to so many punters, but it did and does, as he is still in business.
    He must appeal to the many “end of the world” guys and gals, of whom we do seem to have a preponderance at the moment – climate change, rising CO2 levels, rising sea levels (a change from the Bomb when I was a boy) etc etc.
    Re AI takeovers – I see two models: AI for the mundane routine procedures of living, but as social human beings we like to smell and interact with others of our species, and that sort of particular service model cannot be duplicated unless you prefer robots.
    I notice small retailers in my area still apparently making a living against the Tescos etc using personal service and polite, helpful staff – ie humanoid social interaction as a selling point. Seems to work – so far!
    xxd09

  • 5 ermine May 25, 2024, 1:19 pm

    Re AI, well, it sure ain’t improving Web search any. At the moment AI seems to be busy enshittifying much. Save us from AI ‘art’ in the hands of tyros, this is one bunch of mediocre workers not being made more competent.

    While the dotcom times spring to mind, perhaps that’s recency bias. We’ve seen this movie before, with the railways and possibly electricity. It’s hardly as if electrickery, trains and t’internet have died out in common usage, but the investors were the fall guys for sussing out which ideas worked and which didn’t.

    Many are called, few are chosen 😉

    Ah, AI index funds you say? I used to hold TMT back in them dotcom days. Not TMT investments, but these guys, the iShares TMT ETF, which was later eradicated on the QT during the bust. Who did well out of the dot-com boom? Our old friend Warren Buffett, who studiously ignored it all and bought non dotcommy stuff while everyone was chasing lastminute.com

  • 6 xalion May 25, 2024, 1:41 pm

    GPUs are expensive to buy and the power cost of processing a query using one of the language models is a lot higher than a simple Google search. Unless the business model allows more $ to be received as a result, the incremental impact to profit margins & returns on capital is negative. There are competitive pressures forcing tech companies down this road, not sure it makes them better investments.

    There’s a Buffett anecdote to the effect that everyone at a sports match starts seated, then the front row stands up to get a better view, causing a cascade effect for the rows behind – the end result being that no one can see any better, but all are much less comfortable.

  • 7 xalion May 25, 2024, 1:48 pm

    It’s also rare to find relative value in an area which has attracted lots of excitement from investors, who have bid up the prices without having much clue as to who the winners will be or whether the prize is boobytrapped.

  • 8 Mei May 25, 2024, 5:21 pm

    Interesting article. I do use Gemini (Google’s answer to ChatGPT) but only for limited tasks such as writing a letter. Anything that contains technical detail should be checked by a human. For example, a math problem it generated to educate a kid had errors in it. I doubt it could be used for anything serious in the near future.

    I guess whether to buy Nvidia or not is another question. It’s speculation.

  • 9 Ben May 25, 2024, 6:05 pm

    There’s definitely potential in AI, as the AlphaFold protein structure project by DeepMind shows. But there is also a vast element of hype clouding the picture – like dot-com, Blockchain, ETFs… A big crash will come before we know where the real potential is. LLMs are impressive but unreliable and stupid, perhaps permanently so.

  • 10 Ben May 25, 2024, 7:12 pm

    Scuse typo – NFTs, not ETFs

  • 11 Delta Hedge May 25, 2024, 10:31 pm

    Three different comments if I may:

    1. On @TI’s question: “But should active investors try to up that allocation still further?”: I’d suggest thinking in bets. It could be different this time, but (I’d guess) on 80% of occasions it isn’t, and mean reversion occurs. But you can bet on both horses by ‘barbelling’ the portfolio. Or at least you could if we were in the US and had access to their ETF universe (boo hiss to PRIIPs reg’s and MIFID which stop us getting access to these products here).

    Between them the WisdomTree Global Megatrends Equity index US ETF and the Invesco S&P 500 Momentum US ETF, SPYO, would enable one to concentrate the half of the bet going into the ‘AI wins’ scenario; whilst the other half goes into an ‘AI loses’ allocation to ex-US developed market and Emerging Market Small Cap Value (i.e. the opposite).

    The SPYO ETF is especially interesting here as it solely comprises those S&P 500 constituents with the highest risk-adjusted price return momentum scores adjusted by their cap weights.

    Constituents are weighted by the product of market cap and momentum score, subject to constraints; namely that the maximum weight of each security is the lower of 9% and 3 times its market cap share in the S&P 500.
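    For the curious, the capping logic described above can be sketched in code. This is a toy illustration of the general approach, not S&P’s exact methodology – in particular, redistributing excess weight pro rata to uncapped names is my assumption:

    ```python
    def capped_momentum_weights(caps, scores, max_w=0.09, cap_mult=3.0, iters=50):
        """Toy capped weighting: raw weight is market cap x momentum score,
        each final weight capped at the lower of max_w and cap_mult times the
        stock's plain cap share. Excess weight from capped names is
        redistributed pro rata to the uncapped names until none breach."""
        total_cap = sum(caps)
        limits = [min(max_w, cap_mult * c / total_cap) for c in caps]
        raw = [c * s for c, s in zip(caps, scores)]
        raw_sum = sum(raw)
        w = [r / raw_sum for r in raw]
        for _ in range(iters):
            excess = sum(max(0.0, wi - li) for wi, li in zip(w, limits))
            if excess < 1e-12:
                break  # no weight breaches its cap any more
            w = [min(wi, li) for wi, li in zip(w, limits)]
            free = [i for i in range(len(w)) if w[i] < limits[i]]
            free_total = sum(w[i] for i in free)
            for i in free:  # hand the clipped weight back pro rata
                w[i] += excess * w[i] / free_total
        return w

    # 20 stocks: caps 1..20, one high-momentum name (score 5)
    caps = list(range(1, 21))
    scores = [1.0] * 19 + [5.0]
    w = capped_momentum_weights(caps, scores)
    ```

    The hot large-cap name would take ~34% of the index on raw cap-times-momentum weight, but ends up pinned at the 9% cap, with the excess spread across the rest.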

    When the index is concentrating into the biggest names it becomes very hard indeed to beat it with any form of active stock selection.

    When £1 is invested in the S&P 500 index, 35p now flows into the top 10 stocks. The remaining 65p is divided between the other 490 stocks. So active investors risk lagging unless 35% of their portfolio is invested in those 10 stocks.

    Bill Miller from Legg Mason and Peter Lynch at Fidelity each achieved 15 years beating the S&P 500. Then their strategies ran into headwinds and stopped working. But markets were less efficient and less concentrated then. According to one calculation, at the current concentration levels in the top 10 stocks – basically the hyper-scalers which @TI refers to above – the probability of an active manager (before any fees and transaction costs) beating the S&P 500 for 15 consecutive years, after doing so in the first year, is now just 1 in 2.3 million.
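    As a back-of-envelope illustration of where a 1-in-2.3-million figure comes from (my own sketch, not the cited calculation): if a manager beats the index in any single year with probability p, and years are independent, then 14 further consecutive wins after the first occur with probability p^14. Working backwards from the quoted odds:

    ```python
    # Solve odds = p**14 for the implied single-year probability of beating
    # the index. Independence across years is an assumed simplification.
    odds = 1 / 2_300_000
    p = odds ** (1 / 14)
    print(f"implied single-year beat probability: {p:.2f}")  # roughly 0.35
    ```

    So even granting a manager a roughly one-in-three chance of beating the index in any given year, a 15-year streak becomes vanishingly unlikely.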

    2. @ermine #5: the regular Google search product is now irredeemably enshittified for sure, but Perplexity AI does a pretty decent job as a universal summariser – better, I’d say, than ChatGPT 3.5 (comparing free versions with free versions) – even if it’s no form of intelligence; just a better ‘dumb rock’ stochastic parrot.

    Here’s the answer it gave me to the query as to what effect a declining and aging population would have on both the equity risk premium and the risk-free rate:

    “Based upon the provided search results, a declining and aging population is expected to increase the equity risk premium but decrease risk-free rates. The key points from the sources are: As the population ages, there will be a larger proportion of older, risk-averse investors shifting their portfolios towards safer assets like bonds, increasing demand for risk-free government bonds. This increased demand for bonds will drive down the risk-free rate of return. However, the increased risk aversion and lower risk tolerance of an aging population will reduce overall demand for risky assets like equities. Lower demand for equities will decrease equity prices and increase the expected equity risk premium. Specifically, some sources predict the equity risk premium will increase by around 0.28 percentage points due to demographic shifts, while the risk-free rate will decrease by around 1.5 percentage points. Sources also link population aging to a decline in safe real interest rates (risk-free rate) but a rise in equity risk premia. So in summary, the consensus from the provided literature is that an aging population will put upward pressure on the equity risk premium by reducing demand for risky assets, while simultaneously depressing the risk-free rate by increasing demand for safer government bonds.”

    I thought that was a pretty good first-attempt answer for a machine.

    3. The Tony Isola ‘A Teachable Moment’ piece in the Weekend links perpetuates a common misconception in his erroneous statement: “This capacity is enough to obliterate the planet several times over”. We are not six minutes away from the extinction of the human species, and we never have been. Dr. Brian Martin is a peace and disarmament activist in Australia, but also a social scientist committed to accuracy. Amongst much else, he has pointed out that, whilst the effects would be quite horrifically devastating to the combatant countries, they didn’t (and still don’t) threaten human extinction. Here he is writing at the height of the Cold War in December 1982, when there were several times the number of weapons (both stockpiled and readied on launch-on-warning) compared to now:

    https://www.bmartin.cc/pubs/82cab/index.html

    And this is a credible worst possible case scenario (from an alternative history perspective) written in 2003, and set in August 1988, at just about the worst possible time for an exchange to take place. It’s bad, but no extinction risk.

    https://www.johnstonsarchive.net/nuclear/nuclearwar1.html

  • 12 ermine May 25, 2024, 11:48 pm

    @Delta Hedge #11 I will preface the following with the fact that I am an old git and perhaps resistant to change, but I replicated your query on Perplexity re ageing and got the same result, which is nice. Do I feel I have learned something? Not really – it’s basically the outcome of what was coded in the old rule of thumb: take your age from 100, put that percentage in equities, the rest in bonds. Extrapolate that with supply and demand, job done.

    I asked it to tell me about stoats, and I would have been far better off with the Wikipedia entry; Perplexity also favoured the negative press from New Zealand. Seriously, New Zealanders: the most invasive species in NZ ain’t got four legs. And compared to the rest of the world colonised by mustelids, NZ is a pimple.

    Perplexity’s got the same problem as AI art. It looks impressive – take this William Morris pastiche – but it reeks, in a curiously undefinable way that I have gotten to hate over the last three months. And I was an engineer, I have virtually zero artistic talent, but I can see what’s wrong. If an article has a banner pic that’s AI I don’t bother to read it.

    I’m sure it will improve, and we will learn to use it properly – Edison’s cylinder phonograph and my hifi are a long way apart too. The protein folding stuff is amazing, a genuine advance that we may be grateful for with new drugs. But above all else, in the information space current AI is unsatisfying and a major pollutant.

    The essential problem seems to be that it’s artificial, but it’s not intelligent. And it seems to make a particular type of human’s brain fall out in the admiration of what it can do that we can’t without acknowledging the converse – it’s not superhuman and it’s causing us to devalue what is human.

  • 13 Ducknald Don May 26, 2024, 1:31 pm

    Nice to see the likes of Jim Ratcliffe still can’t bring themselves to say they were wrong.

    On the subject of AI it will be interesting to see if the results can improve without the energy costs going through the roof. I’m impressed with what I’ve seen so far but sceptical of the overall benefits, in particular because it’s big tech that seems most likely to reap the rewards.

  • 14 Boltt May 26, 2024, 1:46 pm

    The two most impressive AI things I’ve read are:

    1. Identifying sex (not gender) from iris scans with 99.88% accuracy

    2. Identifying different fingerprints as being from the same person with 77% accuracy

    Although I only just found out it wasn’t 100% – but clever either way.

  • 15 Delta Hedge May 26, 2024, 3:29 pm

    @ermine #12: Gary Marcus’ Substack is a good place to get some constructive, informed scepticism about AGI/ASI generally, and about LLMs in particular.

    The big questions for me here are:

    a). Is ASI merely difficult but, in principle, within reach (whether over long or, less plausibly, short time scales)?

    Or:

    b). Is ASI just a dream, akin to wishing for magic, where physical impossibility meets the human need to imagine something lying beyond the possible, like each of:

    – Backwards in time travel (e.g. Tipler cylinders):

    https://en.m.wikipedia.org/wiki/Tipler_cylinder

    – Faster than light travel (e.g. Alcubierre drive):

    https://youtu.be/SBBWJ_c8piM?si=BlR3ze8en6tdEp-G

    – FTL communication (e.g. using quantum entanglement):

    https://youtu.be/BLqk7uaENAY?feature=shared

    If ASI & AGI are phantasms of imagination and outside the realm of the possible, like each of the above examples, then anything more than a zero allocation to their commercial realisation would be excessive.

    But if AGI and (ultimately perhaps even) ASI are merely very hard, but not actually impossible to achieve (notwithstanding many incremental S curves of break through & adoption might be required over a long time rather than rapidly reaching a much hyped technological singularity); then there is at least some reason behind the current surge in investment linked to trying to realise these goals.

    However, even then, the possibility of disappointments and delays would still be very substantial indeed. As with the TMT bubble of 1995-1999, even where a technology does ultimately more or less deliver as originally promised, the value of the companies which were built upon it can still crash miserably in the near to medium term if the pace of progress falls behind inflating and accelerating investor expectations.

  • 16 Alan S May 26, 2024, 6:39 pm

    @Delta Hedge (#11) – comment 3

    Interesting recent analysis of the effects of nuclear war in Nature (https://www.nature.com/articles/s43016-022-00573-0)

    Not pretty reading – the 5 billion estimate (after a large exchange) is about 60% of the world’s population.

    Declassified estimates of casualty rates from the 1950s-1970s can be found at https://thebulletin.org/2023/01/cold-war-estimates-of-deaths-in-nuclear-conflict/ and seem to lie around the 50% mark in preemptive attacks or after 30 days. Of course, these are rates in the countries involved.

  • 17 Delta Hedge May 26, 2024, 11:05 pm

    Thanks @Alan S #16.

    The 5 bn figure relies on a full blown, very long lasting and very severe global nuclear winter. Without that it’s topping out at a loss of 360 mn people (which is of course horrific) or 4.5% of the current world population of 8 bn.

    To clarify, I don’t necessarily disagree with the possibility of global nuclear winter, nor with the general thrust of the concern expressed in Annie Jacobsen’s book, which Tony Isola references (and which I’ve read, cheapskate that I am, perusing it whilst in Waterstones 😉 )

    As with last week’s Weekend Reading comment #14 by @BarryGevenson (“90% of life on this planet will be dead in 150 years”), my only objection here is to the factual inaccuracy of Tony Isola’s statement that: “This capacity is enough to obliterate the planet several times over.” That’s a strong, emphatic, but incorrect claim.

    In fairness, he’s an excellent financial blogger, and he’s relying here on Jacobsen’s otherwise superbly presented and quite credible book – but also one which seems to me to veer off right at the end and, after outlining a well researched, well crafted and detailed scenario, go hyperbolic in its concluding pages, suggesting that much of the world would be uninhabitable to humans for 25,000 years.

    There’s 40 years of controversy here (starting in 1982 with “Nuclear War: The Aftermath” in Ambio, published on behalf of the Royal Swedish Academy of Sciences); but not even the most ardent advocates for taking the severity of nuclear winter seriously, and not even the most severe models, predict human extinction – except as the most vanishingly remote possibility.

    As might be expected, in recent years the EA and LessWrong community has been active in both quantitatively probing the models and in reassessing the risks within numerical parameters, see as examples:

    https://forum.effectivealtruism.org/posts/pbMfYGjBqrhmmmDSo/nuclear-winter-reviewing-the-evidence-the-complexities-and

    https://forum.effectivealtruism.org/posts/6KNSCxsTAh7wCoHko/nuclear-war-tail-risk-has-been-exaggerated

    https://www.lesswrong.com/posts/sT6NxFxso6Z9xjS7o/nuclear-war-is-unlikely-to-cause-human-extinction

    The second and third of the above respectively note of the Robock study (which kicked off the modern, post Cold War, series of models on this subject):

    – “Luke Oman, one of the 3 authors of Robock 2007, having guessed a risk of human extinction of 0.001 % to 0.01 % for an injection of soot into the stratosphere of 150 Tg.” [150 teragrammes of soot being the worst case in an all out exchange in Robock’s already very pessimistic study].

    – “Carl Shulman asked one of the authors of this paper, Luke Oman, his probability that the 150Tg nuclear winter scenario discussed in the paper would result in human extinction, the answer he gave was “in the range of 1 in 10,000 to 1 in 100,000.””

    The actual reasoning of Luke Oman here – as one of the most prominent advocates of the possibility of severe nuclear winter – is then set out in his Q&A at:

    https://www.overcomingbias.com/p/nuclear-winter-and-human-extinction-qa-with-luke-omanhtml

    Human extinction risk is the existential dread which Tony Isola seems to fear in his piece, which is linked to in this week’s Weekend reading.

    But, whilst the loss of (at most) between 360 mn and 5 bn lives amongst the 8 bn humans now living would be an unimaginable tragedy and an unprecedented disaster, it would not be extinction.

    Extinction forecloses the lives of everyone who might otherwise live. That could be a lot of people.

    If you very conservatively assume a future average human (and human descended) population size of 1 bn people (i.e. only an eighth of the current world population size) with typical lifespans of a century, and then allow that the Earth will remain habitable for between 500 mn to 1.3 bn years but that natural mass extinction level events seem to occur every 100 mn to 500 mn years, then extinction now would foreclose the possibility of at least a quadrillion (i.e. a 1,000 tn) future human lives.
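    That arithmetic can be checked directly – a quick sketch, where every input is one of the assumptions stated above, not a forecast:

    ```python
    # The commenter's stated assumptions, not forecasts:
    population = 1e9          # assumed average future human population
    lifespan = 100            # assumed years per lifetime
    habitable_years = 500e6   # conservative end of the 500mn-1.3bn year range

    cohorts = habitable_years / lifespan   # successive non-overlapping generations
    future_lives = population * cohorts
    print(f"{future_lives:.0e} future lives")  # 5e+15: five quadrillion
    ```

    Five quadrillion comfortably clears the “at least a quadrillion” lower bound claimed above.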

    This is why the loss of all 8 bn people alive now is likely to be at least a million times worse than the loss of 7 bn out of 8 bn people alive, and not just 14% worse.

    Fortunately, what Tony Isola seems to fear in his piece, namely actual extinction of the human species, just isn’t going to arise out of this particular risk vector.

    And there are plenty of reasonable grounds to doubt whether even an non-extinction level global nuclear winter scenario would eventuate.

    In 1991 it was claimed that the Kuwaiti oil well fires might cause a global winter and lead to famine in Asia (Peter Aldous, January 10, 1991, “Oil-well climate catastrophe”, Nature, 349 (6305): 96, “The fears expressed last week centred around the cloud of soot that would result if Kuwait’s oil wells were set alight by Iraqi forces … with effects similar to those of the “nuclear winter … Paul Crutzen, from the Max Planck Institute for Chemistry in Mainz, has produced some rough calculations which predict a cloud of soot covering half of the Northern Hemisphere within 100 days. Crutzen … estimates that temperatures beneath such a cloud could be reduced by 5–10 degrees C”). Those concerns turned out completely misplaced. There were only very localised and minimal cooling effects.

    With the best of intentions and belief, the late great Carl Sagan and his colleagues sought in the 1980s to draw attention to nuclear winter risk. Based upon the empirical evidence that we now have the benefit of, where they probably went wrong was in expecting the sooty smoke to self-loft: the black particles of soot would be heated by the sun and rise higher and higher into the air, injecting the soot into the stratosphere, where it would take years for the sun-blocking effect of this aerosol to fall out of the air – and with that would come catastrophic ground-level cooling and agricultural impact. Instead it now seems more likely that the soot wouldn’t self-loft to a high enough altitude, and would instead get fairly rapidly washed out by rainfall.

  • 18 Al Cam May 27, 2024, 10:22 am

    @Delta Hedge (#17):
    Re: “With the best of intentions and belief, …”
    A somewhat extreme example of the sensitivity of a model to the underlying assumptions! Thanks for the info.

  • 19 Alan S May 27, 2024, 10:24 am

    @Delta Hedge (#17). Thanks – there’s some interesting reading in the links you’ve given there. Let’s hope the calculations remain theoretical.

    So, to stay at least vaguely on topic (and very much tongue in cheek) – would bonds, equities, or commodities do best during ‘nuclear winter’?

    I suspect that last week’s Weekend Reading comment #14 by @BarryGevenson, claiming that “90% of life on this planet will be dead in 150 years”, was referring to the potential outcomes of climate change, where the WHO currently predicts about 250k additional human deaths per year in areas likely to be particularly hard hit. For other species, potential extinction rates have large error bars (for comparison, there was about 75% species loss when the dinosaurs got ‘zapped’, but that was a bigger event).

  • 20 Marked May 27, 2024, 11:14 am

    So N in FAANG replaces Netflix with Nvidia?

    After results this week, Nvidia ($2.3trn market cap prior to the results) added nearly $250bn the next day – more than the entire value of the UK’s biggest company… in a day!

    It comes back to a company being worth what people are prepared to pay. You’d hope that mid-70s profit margin must come under attack soon.

  • 21 Delta Hedge May 27, 2024, 11:30 am

    @Alan S #19: “would bonds, equities or commodities do best”: Benzinga Markets says that, come the AI (or other) apocalypse, you should invest in a LifeStraw, not gold or BTC:
    https://www.benzinga.com/markets/cryptocurrency/24/05/38821153/bitcoin-and-gold-wont-save-you

    James Altucher has written the book on crisis investing: “The Wall Street Journal Guide to Investing in the Apocalypse: Make Money by Seeing Opportunity Where Others See Peril (Wall Street Journal Guides)”:
    https://www.amazon.com/exec/obidos/ASIN/0062001329/thebigpictu09-20

    And Michael Batnick at Ritholtz Wealth Management says to just carry on as normal 😉 :
    https://www.theirrelevantinvestor.com/p/the-thing-that-doesnt-mix-well-with-investing

  • 22 Al Cam May 28, 2024, 6:38 am

    @Alan S (#19):
    Re: “would bonds, equities or commodities do best during ‘nuclear winter’?”

    William Bernstein concedes there is not much most people can do to protect against confiscation and devastation risks ‘beyond [having] an interstellar spacecraft’. Maybe this is what really motivates Musk, Bezos, Branson, etc – see e.g. https://spaceimpulse.com/2023/03/09/new-space-companies/

  • 23 weenie May 28, 2024, 10:47 am

    Interesting to read about covered call options – when done properly, they are indeed the right tool for the right investor and I personally know at least one person who lives off his options trading income.

    My own foray into options trading has me ‘technically’ selling covered put options, as opposed to the covered call selling strategy explained in the article – it’s just the flip side, and it’s currently working for me.

    Probably still too soon to say it’s the right tool for the right investor in my case though!
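
    For anyone unfamiliar with the mechanics, the payoff of selling a (cash-secured) put can be sketched in a few lines of Python – purely a hypothetical illustration, with made-up strike and premium numbers:

    ```python
    def short_put_payoff(spot_at_expiry, strike, premium):
        """P/L per share at expiry from selling one cash-secured put."""
        return premium - max(strike - spot_at_expiry, 0.0)

    # Hypothetical numbers: sell a 95-strike put for a 2.0 premium.
    # Above the strike you keep the whole premium; below it you are
    # effectively buying the stock at (strike - premium).
    print(short_put_payoff(100, 95, 2.0))  # 2.0 (put expires worthless)
    print(short_put_payoff(90, 95, 2.0))   # -3.0 (5 assignment loss, less 2 premium)
    ```

    The upside is capped at the premium; the downside is the same as owning the stock from the strike down – which is why it only ‘works’ while markets are flat or rising.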

  • 24 Delta Hedge May 28, 2024, 11:51 am

    Superb point @weenie (#23).

    Linking your thoughts above to both the “life hedge” (WGGTB) and the Random Roger pieces in the links, I wonder if an investor coming up to retirement and wanting to derisk could:

    – sell covered calls on their equity holdings (say a global equity tracker); and,
    – instead of going immediately into the types of counter-equity-cycle investments that Random Roger covers, which are meant to rise a bit when equities fall;
    – use the call premia received to buy OTM puts on the same global equity index;
    – so that if the worst happens and equities plunge, their ‘crash insurance’ has been paid for by selling the calls.

    In this scenario, if equity markets surge and the calls are exercised, then the investor just sells the equity portfolio to the call option buyer at the strike price – which fits in quite well with derisking from equities into either or both of bonds and the alternative types of investments that Random Roger covered last week.
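
    That combined position – long equity, short call, long put – is the classic ‘collar’. A minimal sketch of its payoff at expiry (hypothetical strikes and premia, not a recommendation):

    ```python
    def collar_payoff(spot_at_expiry, shares, call_strike, put_strike,
                      call_premium, put_premium):
        """Value per position at expiry of equity + short call + long put."""
        equity = spot_at_expiry
        short_call = call_premium - max(spot_at_expiry - call_strike, 0.0)
        long_put = max(put_strike - spot_at_expiry, 0.0) - put_premium
        return shares * (equity + short_call + long_put)

    # Hypothetical example: index at 100, sell a 110 call for 3, buy a 90 put
    # for 3 – a 'zero-cost collar', since the premia offset each other.
    for s in (70, 100, 120):
        print(s, collar_payoff(s, 1, 110, 90, 3.0, 3.0))
    # 70  -> 90.0  (floored at the put strike)
    # 100 -> 100.0 (unchanged in the middle)
    # 120 -> 110.0 (capped at the call strike)
    ```

    The sold call pays for the put, at the cost of giving up any upside beyond the call strike – exactly the trade-off described above.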

  • 25 Delta Hedge May 31, 2024, 6:19 pm

    Nice summary here from a Wharton Business School Prof. of four potential AI economic pathways and of LLMs’ modus operandi:

    https://youtu.be/d4f1jqb3Yis?feature=shared

    Meanwhile AI cheerleader-in-chief and technological singularity soothsayer Ray Kurzweil has a soon-to-be-published follow-up to his 2005 (slightly bonkers) foundational AI/AGI/ASI classic “The Singularity Is Near”:

    https://www.penguin.co.uk/books/462759/the-singularity-is-nearer-by-kurzweil-ray/9781847928290

    And former Guardian writer, part-time Buddhist, and Bali-based digital nomad and sci-fi commentator Damien Walter has an intriguing (anarcho-capitalist, libertarian-socialist, hybrid mash-up?) take on the potentials and perils of AI. The second of these is quite long – I listen to them at 2x speed on YouTube. The key question in the second runs from 7 to 14 minutes in.

    Is it going to be a utopia in the mould of Iain M. Banks’ Culture; or instead one of William Gibson’s, Philip K. Dick’s, or Aldous Huxley’s dystopias; or, worse still, one of the worlds that Frank Herbert or George Orwell warned against:

    https://youtu.be/iVd1hPewcCw?feature=shared

    https://youtu.be/uGZW1xnkzkI?feature=shared

  • 26 Delta Hedge June 1, 2024, 9:28 am

    Also recommend this Prof. Stuart Russell talk on AI last month at the Neubauer Collegium at the University of Chicago:

    https://youtu.be/UvvdFZkhhqE?si=3MWUipNKCR-8ryVv

    And this interview with him, also from last month, with the Cal Alumni of UC Berkeley:

    https://youtu.be/QEGjCcU0FLs?si=Ey2iw3JO8om3Jw0I

    Stuart Russell and Prof. Geoffrey Hinton are among the acknowledged ‘Godfathers of AI’. It’s fair to say that both are rather concerned on the safety front. There are a great many talks by each of them out there, but these two items seem to be Prof. Russell’s most recent in this very fast-moving area.

  • 27 Delta Hedge June 14, 2024, 7:21 pm

    Interesting take on the use of AI in investing:

    https://www.telegraph.co.uk/business/2024/06/13/ai-better-investment-decisions-humans/

    And this one on why maybe the technological singularity really is near to hand:

    https://situational-awareness.ai/wp-content/uploads/2024/06/situationalawareness.pdf

  • 28 The Investor June 14, 2024, 10:41 pm

    @Delta Hedge — Yes, the second link is a bit of a sobering read, isn’t it, particularly if one notches up its credibility a couple of ticks due to the author. It hit just when I was calming down about ChatGPT, having had a play the other day with a bunch of induced errors. Perhaps I’ll belatedly include it in the links this week.

  • 29 The Investor June 14, 2024, 11:31 pm

    p.s. On the other hand, one of my friends who works in AI says not so fast, sent me this link:

    https://www.youtube.com/watch?v=xm1B3Y3ypoE

  • 30 Delta Hedge June 20, 2024, 11:30 pm

    More sobering takes on an accelerated timeline from ML/LLM scaling to AGI to ASI:

    https://open.substack.com/pub/unchartedterritories/p/what-would-you-do-if-you-had-8-years

    tbh I don’t know whether to be terrified, anxious or dismissive.

    Instinctively, I favour the precautionary principle until potentially catastrophic tech/ideas are proven reasonably safe in all plausible scenarios.

    We now know, for example, that both GM foods and nuclear power are safe by all reasonable and fair-minded definitions (not completely safe and absolutely free from adverse consequences, to be sure; but that’s an unreasonable standard for technology conferring significant global benefits).

    ASI might turn out likewise to be safe for all practical purposes, if it does emerge soon.

    But until we actually know, we should tread very carefully.

    You wouldn’t rush to make an investment decision involving your entire portfolio. So too we should pause, reflect, test and assure.

  • 31 Delta Hedge July 4, 2024, 1:36 pm

    [Correction: my reference to the SPYO ETF at #11 should have been to the SPMO ETF]

  • 32 Delta Hedge July 17, 2024, 10:11 pm

    Latest astounding price target and market cap forecast for Nvidia – for a change, it’s not from Cathie Wood, but from James Anderson, formerly of Scottish Mortgage fame:

    https://fortune.com/2024/07/16/nvidia-market-cap-50-trillion-investor-james-anderson-amazon-tesla/

  • 33 Delta Hedge July 27, 2024, 9:59 pm

    I’m posting this here because there are some obvious implications for investors if the LLM-scaling race (and hype) from narrow AI to AGI to ASI is not merely trying to ascend a much steeper and far higher mountain than OpenAI, Anthropic, xAI, Meta, Gemini et al are planning for – but is instead directing all its efforts at the wrong mountain entirely, where the right mountain to climb is effectively impossibly tall and steep.

    So: for over 30 years Sir Roger Penrose – winner of the 1988 Wolf Prize in Physics, shared with Stephen Hawking for the Penrose–Hawking singularity theorems, and a 2020 Nobel Prize in Physics laureate – has had a highly unorthodox and contentious conjecture about how consciousness arises in the brain.

    He thinks that consciousness could be a quantum process (orchestrated objective reduction) involving structures common in neurones called microtubules.

    Apart from the anaesthesiologist Stuart Hameroff, almost no one else has taken the idea seriously. It would require quantum superpositions in the microtubules, in the warmth of the brain, to be sustained for many orders of magnitude longer than wave function collapse takes for qubits of only a few particles in the cold (near absolute zero temperature) environments of quantum computers. (Decoherence takes no more than an attosecond in non-pristine environments – far quicker than any brain process, such as neurones firing.)

    There are several other technical and fundamental objections to the idea and, so far at least, it just hasn’t garnered any enthusiasm from the physics, neurology, or computation professions trying to understand the hard problem of consciousness.

    Anyway, there’s been something of a breakthrough recently covered very clearly here:

    https://youtu.be/xa2Kpkksf3k?feature=shared

    The upshot for AI research and development is that, if Penrose is right, no classically algorithmic process can lead to consciousness, and human-like (or superhuman) intelligence cannot emerge from any current approach to AI.

    And if the AI industry is barking up the wrong tree and (mixing metaphors here) trying to climb the wrong mountain, then I’d guess that’s not great for investment in tech-heavy indices right now.

  • 34 Delta Hedge July 28, 2024, 9:32 pm

    An excellent review of where we are with LLM and neural nets from Gary Marcus:

    https://open.substack.com/pub/garymarcus/p/alphaproof-alphageometry-chatgpt

    Could we be just 12 months from the start of the next AI winter, as Gary thinks, or do we just need to have more ‘Situational Awareness’, as Leopold Aschenbrenner contends?

    Under a regret minimisation framework:
    – If Leopold is right, then FOMO could still be satisfied for the majority by having no more than 50% in US large caps, bearing in mind that many of the AI chasing firms are unlisted anyway.
    – And if Gary is right, then loss aversion for most people will still be somewhat assuaged by keeping the US large caps to a 50% limit.

  • 35 Delta Hedge July 29, 2024, 8:26 pm

    And here’s one today giving the funeral rites to Open AI:

    https://www.wheresyoured.at/to-serve-altman/

    I’m starting to wonder if this is like the summer of 1999, and the market goes pop after Christmas when uncomfortable realities about the ‘no show’ for an AI revolution set in.

    Still, at least it looks like Open AI has some revenues ($3.5 bn to $4.5 bn annualised). So there’s something for hope to grab hold of there.

  • 36 Delta Hedge August 30, 2024, 2:59 pm

    Nvidia’s return over the last five years is now higher than its return over the five years after its IPO. Nuts. Contrast that with BTC:

    https://open.substack.com/pub/ecoinometrics/p/bitcoin-diminishing-returns-and-being

  • 37 Delta Hedge September 10, 2024, 11:14 am
