What caught my eye this week.
Robinhood is planning to launch a publicly traded fund to enable US investors to gain exposure to unlisted companies like SpaceX and Stripe.
It reminded me that this is one area where we UK investors actually have it better.
Similarly, there's news of another soon-to-be-listed venture fund in the US from an outfit called Powerlaw. It's an investor in the likes of OpenAI and bleeding-edge weapons maker Anduril.
These kinds of risky but potentially revolutionary startups are meat and potatoes for Baillie Gifford, the Scottish manager that runs investment trusts like Scottish Mortgage and Edinburgh Worldwide.
And to my mind the investment trust structure is the ideal vehicle for holding private companies for the long term. It sidesteps the liquidity issues you inevitably get with open-ended funds that hold illiquid assets. And a trust’s transparency requirements and independent board of directors mean – at least in theory – extra safeguards for ordinary shareholders.
Ironically though, a big reason the investment trust sector has been under pressure for the past few years is precisely because some trusts have large holdings in unlisted companies!
Even storied RIT Capital still trades on a discount to net assets of over 25%, largely on account of its private holdings.
And this despite a track record of private investments previously delivering good returns for the fund.
Trusts worthy
The absolute amounts managed by such trusts are tiny in the grand scheme of things. Mighty Scottish Mortgage – by far the biggest – has an asset base of just £15bn. Many others – such as titchy Augmentum – manage only a fraction of that.
It wouldn’t take much new money flowing in for such trusts to grow. In an ideal world I think they would be gently expanding, not facing existential pressures for survival.
Of course they must deliver returns that make holding the trust worthwhile in the long run. Discount risk is a headache for many everyday investors, too.
But the trusts do offer genuinely different exposure (compared to say a trust that owns FTSE 100 stocks) and I think we take them for granted.
Investing in private assets is not for everybody today. But there’s an argument to be made that one day it could be. Public markets globally are shrinking. We’ve also seen the rise of multi-hundred billion dollar unlisted ‘start-ups’ that most investors have zero exposure to – and hence do not benefit from.
Hopefully we’ll still have a vibrant investment trust sector to serve private investors if and when we need them!
Have a great weekend.
From Monevator
Returns aren’t average – Monevator
Investing in the face of AI: beauties or the beasts? – Monevator [Mogul members]
From the archive-ator: Gagadom and The Grim Reaper – Monevator
News
Lower food and fuel prices drive inflation down to 3% – BBC
Record-breaking budget surplus as government’s tax income rises – Sky
Household energy bills forecast to fall by £117 a year – Guardian
UK unemployment hits highest rate for nearly five years – BBC
Small investors in Brewdog reeling as brewer put up for sale – This Is Money
Man receives £42,000 bill for data roaming charges in Morocco – Guardian
Supreme Court rules Trump’s ’emergency’ tariffs are illegal… – CNN
…and Trump announces immediate 10% global tariff after the rebuke – Sky
UK flat prices fall after sharp drop in London [Paywall] – FT
Record number of buy-to-let limited companies set up in 2025 – This Is Money
Products and services
Disclosure: Links to platforms may be affiliate links, where we may earn a commission. This article is not personal financial advice. When investing, your capital is at risk and you may get back less than invested. With commission-free brokers other fees may apply. See terms and fees. Past performance doesn’t guarantee future results.
Are you ready for HMRC’s Making Tax Digital self-assessment shakeup? – Guardian
Natwest offers £150 switching bonus and a 7% savings rate, with a catch – This Is Money
Should you use a mortgage broker to get a mortgage? – Which
Get up to £1,500 cashback when you transfer your cash and/or investments to Charles Stanley Direct through this affiliate link. Terms apply – Charles Stanley
The best auto savings hacks and apps – Be Clever With Your Cash
Tides of tax drive high-earners to offshore bonds [Paywall] – FT
Banks slash mortgage rates for first-time buyers with small deposits – This Is Money
Password managers’ promise that they can’t access your vaults isn’t necessarily true – Ars Technica
Get up to £3,000 cashback when you open or switch to an Interactive Investor SIPP. Terms and fees apply, affiliate link – Interactive Investor
Tembo HomeSaver review: 5.75% if you’re saving for a house [Catches!] – B.C.W.Y.C.
Five ways that AI could be reshaping your relationship with money – The Conversation
Does your car insurance really cover stolen possessions? – Which
Homes for sale with luscious lawns, in pictures – Guardian
Comment and opinion
The brutal hunt for low-paid work: “It’s like the Hunger Games” – Guardian
Inflation matters more than returns to retirees – White Coat Investor
“Why I’m telling my kids to saddle themselves with student debt” – This Is Money
Is it better to rent or buy when you retire? [Paywall] – FT
Young Britons on why they’ve left to work abroad – Guardian
AI comes to FI – FIRE v London
Don’t major in minor things – A Teachable Moment
The best strategies for boosting starting withdrawals in retirement – Morningstar
US equities are still 40% more expensive than non-US equities – Apollo
Escaping the permanent underclass may not be necessary – Financial Samurai
Active managers keep losing as passive investing grows – Larry Swedroe
Naughty corner: Active antics
The eternity of intelligent investment – Kingswell
How to be one of the less terrible retail investors – Morningstar
Fund beating 99% of peers says few software firms will survive AI – Bloomberg via Yahoo
Let’s talk about the new Berkshire Hathaway – Brooklyn Investor
Mad money and the big AI race – OM
Bayes and base rates [Nerdy, PDF] – Morgan Stanley
Kindle book bargains
The Wealth Ladder by Nick Maggiulli – £0.99 on Kindle
How to Work Without Losing Your Mind by Cate Sevilla – £0.99 on Kindle
Million Dollar Weekend by Noah Kagan – £0.99 on Kindle
The Retirement Handbook by Ted Heybridge – £0.99 on Kindle
Or choose an investing classic – Monevator shop
Environmental factors
Plug-in hybrids use three times as much fuel as claimed, analysis finds – Guardian
China is killing the fish – Noahpinion
Environmental groups sue Trump’s EPA over repeal of climate finding – Guardian
The curse of dead coral – Biographic
Robot overlord roundup
Why AI writing is so boring and dangerous: semantic ablation – The Register
The AI disruption we’ve been waiting for is here – NYT [h/t Abnormal Returns]
AI takes a swipe at the online dating scene – PitchBook
Video game experts say Google's Project Genie isn't an industry killer – Sherwood
The AI productivity takeoff is finally visible [Paywall] – FT
Not at the dinner table
The UK’s trade after Brexit: not pretty – Klement on Investing
How Brazil stopped a Trump-style authoritarian in his tracks – Vox [h/t Abnormal Returns]
Trump’s new world order is real and Europe is having to adjust fast – BBC
The grift economy is going mainstream – Your Brain on Money
Trump’s relentless self-promotion fosters an American cult of personality – NYT
Office versus work-from-home mini-special
The end of the office – Andrew Yang
Younger companies and leaders embrace more remote work, study finds – Sherwood
The worst-case scenario for white-collar workers – The Atlantic
US office market is as K-shaped as the US economy – Bloomberg via A.P.
Off our beat
Why Europe doesn’t have a Tesla – Works in Progress
The biggest myths about attraction, debunked by science – Next Big Idea Club
Cringeworthy in the future – Kevin Kelly
And finally…
“I had thought the destination was what was important, but it turned out it was the journey.”
– Clayton Christensen, How Will You Measure Your Life?
Like these links? Subscribe to get them every Saturday. Note this article includes affiliate links, such as from Amazon and Interactive Investor.





@TI
Thanks for the links.
If anybody still needs some encouragement to embrace using real returns the White Coat Investor link is worth a read.
I am not quite sure what to make of FvL’s AI experience – maybe it is me, but it seemed to me like the AI backtracked too often to be useful.
Also I think the post about password managers might be of some concern (were I to use one) but again it wasn’t really shouting out “here be dragons” to me.
‘Why Europe doesn’t have a Tesla…’
Reads like an article written by AI. Same old stereotypes, same old tropes.
Translation : “Hey you guys…, why aren’t y’all like us ..?”
Perhaps ‘we’ just don’t want to be ‘Murican.
Don’t forget the Pantheon investment trust which, as far as I know, does pretty much all private equity.
@TI – we mentioned SMT in comments recently I think. I also have RCP, which I bought at the same time – quite sobering to see the difference in performance over 20 years or so… I am much more passive now but I do still have a small proportion in investment trusts, and I would be sad to see them disappear.
The Atlantic and Andrew Yang links are very sobering. If those outcomes came to pass, what would the economy be like? Or, through my construction background lens: if I was a PM and got laid off because of AI, why retrain as, say, a plumber if there are no clients with the money to hire me – because of AI?
Thanks for the article.
Read the article on inflation. One thing about living through (a phrase which captures the experience) the 70s is an intrinsic awareness of inflation. It wasn't too bad for me, as we got annual pay rises of 8%+, but those on fixed incomes must have been through the wringer. As a retiree I worry about this.
On a separate note, you picked up a link about whether to use a mortgage broker. Any chance you could find something equivalent about annuities? That might plug a hole in my knowledge.
Anyway cheers !
These “AI is eating computer programming” articles are so disconnected from my actual experience of AI tools that I wonder if we’re even talking about the same thing. My experience is being flooded with AI slop generated by careless colleagues. The code, the writing is all nonsense, often at a conceptual level. These coworkers don’t respect my time, they don’t even check that the slop compiles, never mind works. When I point out that what they’ve done fundamentally can’t work, they get all stroppy about it. Naturally management are encouraging all this.
The worst of it is these tools generate vast amounts of this slop, far faster than we can look at it and reject it. What would happen if you shipped any of this to a customer and it ate their data? How would you fix something you had no understanding of in the first place?
From the Atlantic article I found a link to a story about how some journalists “vibe coded” some SaaS application, causing the share price of the SaaS company to drop. This says a lot more about credulous journalists and investors than it does about computing.
Sorry, rant over, but a bit of scepticism is called for.
Have had a play with Antigravity, and Cursor before that. A year ago I "vibe-coded" an entire node.js website for one of my projects (a consulting practice I've set up for… reasons) and it works just fine (my last experience of web coding being HTML on WordPad). I also use Perplexity spaces to manage different clients and it gets 80% of the summarizing and drafting done – put another way, it allows me to (if I wanted to) work 4x as much – instead my semi-retired self gets to spend a couple of hours a day by the pool after an hour of "work".
While the hype certainly gets ahead of the reality, unlike, say, blockchain there is real economic substance in AI and it *will* render some lower-level white collar jobs obsolete, or at least much less in demand. The structural problems of a white collar service economy like the UK/US do need to be addressed if my kids are to have the standard of living we experienced in the 90s/noughties.
@MJ – was also thinking about the comparison of AI with blockchain. Genuine utility vs zero practical utility. Everyone I know who is busy is picking up AI tools. Everyone I know through work under the age of 35 is starting to use AI to produce code. Loads are already paying for AI tools.
I get the feeling that those decrying AI are doing so 'because they can', i.e. they don't have a pressing requirement to keep up?
Granted it’s not perfect, far from it, but it is already handy. I can’t see the genie being put back in the bottle?
@Rhino #8 Even as a cynical git about AI I use it to produce code, and indeed I learn from it, which is probably telling me what a rotten coder I was. It wasn't my first love – I followed the company out of electronics into it. And I do learn from AI in elementary coding.
It's the only aspect of AI that I observe value in. What it is doing to the written word is horrible. I thought the SEO/content farm period was terrible, but the vapid written crap that is filling t'internet will turn us into an illiterate society probably within my lifetime. Not so much in that people can't read, but that there will be no point: it's literally all been said before and AI will slice and mash it up as required, which philosophically means the end of anything worth reading. We're already seeing that with podcasts and audiobooks and the how-to-do-anything stuff migrating to YouTube.
I haven't worked out whether the coding advantage of AI was my low starting level or a genuine shift, a bit like the move from assembler to interpreted/compiled stuff, and another step up to concepts rather than crafting algorithms. There is of course the usual worry that so far AI is trained on human-written code – what happens when it eats its own tail? But I guess we will find out.
Panacea for the ills of the world it isn’t, and the shocking burning of cash and power/other resources is a serious downside that hopefully will stop when the absence of returns justifying such excess shows up.
In line with your experience, potentially automated coding is the real strong point (though I take @R's observation at #6 about this not necessarily being a straightforward bed of roses). On written text, it's the boilerplate where it excels, not the beautiful prose. Think formal letters, policy docs, legal texts etc. I'm finding it gives me a good leg up in these areas. As well as coders being worried, I can see this coming for the legal profession in a very big way. In terms of understanding and producing legal-type texts, it's super useful.
I think it’s interesting that @rich @mr_jetlag and even @ermine can report somewhat different results re: coding and AI.
My friend I mentioned in my recent Moguls piece who said he can only really see himself coding ‘for fun’ in the future is a convert. It’s taken him three years. This is a guy with a global top tier foundation in software engineering who has built his life on it. So how do I square that with @Rich’s comments?
Perhaps the problem is that he was an excellent coder without AI, and now he’s an excellent and — from what he has told me — more efficient producer of code *with* AI? Whereas the people turning out software slop were poor coders with and without?
I think there's every reason to expect AI coding to continue to get better, anyway. It seems the perfect use case – certainly more so than making fake sci-fi videos featuring Trump as Emperor Palpatine or whatnot.
@Al Cam — I wasn't sure what to make of the password manager post either. Included it because of the strong response to the article we ran on password managers on Monevator recently. It seemed a rather stretched example of a vulnerability.
@Trufflehunt — I agree to an extent, except that guy does write good copy and it's true we don't have a Tesla. I'd note though that ten years ago we had BMW and we still have Ferrari. I think the problem for Europe is more from the 'software is eating the world' category (i.e. US tech) than proving out US labour practices 😉
@Hospitaller — Oh yes, there's loads. 🙂 Harbourvest is interesting and I owned it until recently. Small ones like Chrysalis and Molten (own both currently).
@Larsen — I think RCP might just be turning a corner…I reluctantly sold some recently to buy into a couple of things sold off in this year’s software rout, but I’m keen to re-up. Time will tell. Agree about the Andrew Yang piece, it sent me scurrying to remind myself exactly what his bona fides are.
@Mr Optimistic — Whenever I muse over annuities I always include one of the inflation options. I can't imagine the terror of being on a genuinely static income (state pension aside), although I guess I could just ask my mum, as my dad resisted my suggestion decades ago that he inflation-protect his upcoming post-work income. (It was modest and he was dead a few years after retiring, so maybe it was the right call by him – it's too dark and sobering for me to figure out :-\ )
@rich @mr_jetlag @Rhino — Interesting points, so my general reply above. As @rhino says this stuff is out now and only going one way, whatever the current state of play. See my Moguls article for more! 🙂
@ermine — In case you missed it, have a read of the 'Semantic Ablation' article in the links. Thought of you and one of your recent rants when I read it!
“Only going one way”. Well, it all depends.
What’s meant by ‘one way’? Inevitable ‘progress’. Plausibly (?) Likely (?) But Inevitable. Not.
Change? Yes. Accelerating uncertainty. Yes again.
Highly dispersed outcomes (at the jagged edge of now). Indeed so (maybe the future really is already here, but just not evenly distributed).
What starts in cyberspace doesn’t necessarily always stay there.
Meatspace is potentially on the menu.
Will Reinforcement Learning from Human Feedback lead to Continual Learning?
Will the deterministic blend with, and bend, the probabilistic, as neural nets become neuro symbolic syncretic systems?
Is embodiment necessary?
Will we get it anyways (eventually a trillion agents in a trillion machines, a true Internet of Things)?
Can there be intelligence without an emergent consciousness?
All of these are very (very) open questions to formulate, and to frame.
We must not fool ourselves into thinking that we yet have the (or indeed any) 'answers'. Richard Feynman absolutely applies here (“the first rule is you must not fool yourself, and [to recognise that] you are the easiest to fool”).
Tomas Pueyo, over at Uncharted Territories, has a mental map, of sorts, for his vision of an AGI future here: “compute has been progressing by 10x every two years, and must continue doing so. And here we are: The new NVIDIA architecture, Vera Rubin, reduces token (“thinking”) costs by 10x over the previous architecture (Blackwell) and training costs by 4x. As a result of this and algorithmic improvements, models keep improving at an incredible rate. The cost per task has shrunk by 300x in one year”…
And here:
“Computers will keep improving about 2.5x every year, so that by the end of 2030 they will be 100x better. We will also continue spending 2x more money every year, which adds up to 32x more investment by the end of 2030. These two together mean AI will get 3,000x better by 2030, just through more (quantity) and more efficient (quality) computers”…”[Between] 2012 and 2023 saw a 22,000x [algorithmic] improvement, which backs out to 0.4 OOMs [Orders of Magnitude] per year. So across the board, it looks like we can get ~0.4 to 0.5 OOMs of algorithmic optimization per year. We won’t have that forever, but we have had it for a bit over a decade, so it’s likely that we’ll continue enjoying it for at least 5 years. If so, by 2030, we should have optimized our algorithms by ~300x”
And finally also here:
“These AI performance evaluations (“evals”) were designed to last a very long time, but they last less and less. The ultimate eval (named “Humanity’s Last Exam”) was meant to last years or decades by making questions extremely hard for AIs. It hoped to remain relevant until AGI. Instead, in less than a year, GPT has gone from a ~3% score to ~32%.”
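For what it's worth, here's a quick back-of-the-envelope check of how those headline multipliers stack up – my own rough sketch of the quoted arithmetic, not anything from Pueyo's workings:

```python
import math

# Back-of-the-envelope check of the quoted scaling arithmetic
# (my own illustrative sketch, not Pueyo's figures or code)

hardware_gain_per_year = 2.5   # "computers will keep improving about 2.5x every year"
spend_gain_per_year = 2.0      # "spending 2x more money every year"
years = 5                      # roughly end of 2025 to end of 2030

hardware_gain = hardware_gain_per_year ** years    # ~98x, i.e. the quoted "100x better"
spend_gain = spend_gain_per_year ** years          # 32x, the quoted "32x more investment"
compute_gain = hardware_gain * spend_gain          # ~3,100x, the quoted "3,000x better"

# Algorithmic progress: "22,000x improvement" over 2012-2023, in orders of magnitude per year
ooms_per_year = math.log10(22_000) / 11            # ~0.39, the quoted "~0.4 OOMs per year"
algo_gain_by_2030 = 10 ** (0.5 * years)            # ~316x at 0.5 OOMs/yr, the quoted "~300x"

print(round(hardware_gain), int(spend_gain), round(compute_gain))
print(round(ooms_per_year, 2), round(algo_gain_by_2030))
```

So the headline multipliers at least hang together internally, whatever you make of the underlying assumptions.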
But a map is not the territory.
And we’re talking here about those bits where it says ‘here be monsters’.
We just don’t know. And we won’t know until it’ll be too late.
FAFO as they say these days.
If the worst happens, ASI arrives, and we depart, then at least we get briefly to 'admire its purity' as Ash/Ian Holm put it back in the day 😉
https://youtu.be/1Z5sX4qC5HE?si=9HQZvrHRbmMMUL6a
@Delta Hedge — Cheers for thoughts.
Note that I didn't say “one way FOREVER” 😉 But just refining the current state-of-the-art, improving post-training and tailoring for specific applications and verticals should deliver productivity benefits and get more people using (/potentially being displaced by) AI for a good few years to come.
IMHO you don’t have to think ’embodiment’ is around the corner to think AI technology is going basically one way, any more than you needed to believe HAL was imminent to see the potential medium-term course for the original IBM PC or Apple Mac 🙂
@TI #11 I read the original Semantic Ablation in El Reg when it came out, and did think finally somebody gets it, and at least from a source which should have AI competence. I recently read Hemingway's 'The Sun Also Rises', the novel with the classic quote about how you go bankrupt – two ways, gradually and then suddenly – and it was refreshing to read something totally untainted by AI. I'm starting to come round to the principle of only reading books written by dead people – I see the ugly hand of AI in some fantasy writing now, though I guess SF/fantasy was always considered infra dig by literary experts so no surprise there.
AI coding was weird. At the level I used it, it didn't particularly teach me algorithmic innovation. But it sure as hell knew its way around the various computer languages, and often had an elegance in approach or used language wrinkles/libraries/constructs I was unaware of. I only used general purpose AI like ChatGPT rather than coding-specific things like Claude and Replit, because I am too tight to pay for it. It only got things to run/compile about 70% of the time; curiously, what needed to be done to fix that was often reasonably clear to me, though a further prompt of 'fix this' often got it sorted too if I was feeling idle. Absolutely makes me a lot faster to get there from here than before.
@TI #13: it may be HAL that’s imminent or KITT, or something altogether more mundane, or something far stranger.
It’s the radical uncertainty that makes people tune out on the prospects or possibilities of autonomous advanced AGI/ASI I think.
Humans crave feelings of certainty. Pattern seekers.
We’d much rather believe something that’s demonstrably untrue than not be able to believe in anything at all.
The not knowing is the powerlessness. That freaks people out. Hence almost everyone anchoring on the past as a prelude for what comes next, even though what comes next may be without any precedent.
I just wish that within our ISAs and our SIPPs we could all go and buy some deeply Out The Money index calls and index puts using LEAPS and also buy some of that US CHAOS ETF which @Finumus likes ( 😉 ).
Feels like it’s past time now for some long term insurance re both the up and downside (over/under) exposure risk.
Of course, I'm anchoring there on an expectation that there'll still be a way to collect if the strikes hit before expiration. Not much point being the richest corpse (on paper) in the graveyard 🙁
I’ll stick my brass neck out here though and predict that volatility will be up over the next decade compared with the last.
@DH. ” The new NVIDIA architecture, Vera Rubin, reduces token (“thinking”) costs by 10x over the previous architecture (Blackwell) and training costs by 4x. As a result of this and algorithmic improvements, models keep improving at an incredible rate. The cost per task has shrunk by 300x in one year”…
You’ve got to love NVIDIA’s ability to cherry pick the one inference benchmark out of 100 where they will get that performance upgrade. And then plaster it all over the media. Wonder how much training costs have improved? Too cynical? Yes, I’ve been buying their GPUs for well over 20 years. You learn how to discount everything NVidia says heavily.
Let’s imagine, though, that their numbers are correct. That means that all the hardware their customers have been stockpiling has just been made totally obsolete. It’s got zero value if you can get something that uses 1/10 of the energy. So the asset side of these companies’ balance sheets has just gone to zero. Their debt side hasn’t though …
Apologies for the second bite here, but I realised that there was a troubling question gestating in my subconscious (and now manifest) for @ermine, following #9 and #14 above (re: “turn us into an illiterate society probably within my lifetime” and “I see the ugly hand of AI in some fantasy writing now, though I guess SF/fantasy was always considered infra dig by literary experts so no surprise there”).
What would it take for you to change your mind that a future machine intelligence had, or at least could have, a human-equivalent (or superior) creativity, and that (channelling Hawking) fire had been breathed into the equations of physics to make that happen?
What if a future machine mind produced, unaided, and ab initio, sci-fi/fantasy's own Ulysses, namely Gene Wolfe's Book of the New Sun tetralogy?:
https://ultan.org.uk/review-botns/
Or if it produced a superior Ulysses itself?
Would that then convince you?
What would be the standard of your proof, if any?
Would you be prepared to change your mind on this at all; and, either way, why or why not?
@ZX: sorry spotted your post only after posting in response to @ermine’s.
But, to counterpoint, you can't fake token costs. It's a definite quantity. And 99.7% deflation annually is pretty impressive. In peak nineties dot.com days it was said fibre capacity doubled every 100 days (so a roughly 91% cost reduction per bit/sec per annum, and a million-fold reduction in cost per bit/sec over 1994-2000); but the sheer rapidity of token price reduction now seems to blow that one out of the water.
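Showing my working on those two rates, roughly – illustrative back-of-the-envelope arithmetic only, with the assumptions spelled out in the comments:

```python
# Rough check of the two cost-deflation claims above (my own illustrative arithmetic only)

# "Cost per task has shrunk by 300x in one year"
token_deflation = 1 - 1 / 300                       # ~0.997, i.e. the ~99.7% annual deflation

# "Fibre capacity doubled every 100 days" in the late-90s dot.com era
doublings_per_year = 365 / 100                      # ~3.65 doublings a year
capacity_gain_per_year = 2 ** doublings_per_year    # ~12.5x a year
fibre_deflation = 1 - 1 / capacity_gain_per_year    # ~92% cost cut per bit/sec p.a. (vs ~91% quoted)
gain_1994_2000 = capacity_gain_per_year ** 6        # roughly 4-million-fold over the six years

print(f"{token_deflation:.1%} {fibre_deflation:.1%} {gain_1994_2000:,.0f}")
```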
And alongside Capex (or is today's Capex just tomorrow's thinly disguised Opex?) doubling year on year, FLOP/joule going up 2.5x annually and (with algorithmic improvements) 'effective FLOP' per FLOP rising perhaps 3x or 4x p.a.; there are the harder to quantify and predict, and lumpier, 'unhobbling' advances, like Mixture of Experts architectures. Given the potential for developments like reversible computation etc, there seems a long runway ahead (logarithmically long, not necessarily temporally long).
Although I have to say that I think that China's eventually going to end up winning this race (with cheap but ingenious unhobblings and algorithmic advances), and not either NVDA with bleeding edge (but bleeding expensive) GPUs or GOOG with their Trillium TPUs.
@DH
Nothing, since it's a philosophical impossibility. Apropos Clarke's 'any sufficiently advanced technology is indistinguishable from magic', how would I recognise it for what it was?
While I don't temperamentally share your AI boosterism, and I don't believe it will be emergently conscious, that's by the by. Intelligent conversation should be able to hold contradictory positions at the same time, as per F Scott Fitzgerald.
Thusly, accepting your implied premise that some sort of future AI could produce prose so far ahead that it makes Shakespeare look like a 9-year-old, how the hell would I be able to tell it apart from line noise? Transmitter and receiver must have a shared symbolism – it doesn't have to be a 1:1 mapping, but at least a useful amount of overlap. I will always be limited by my base humanity; your AI may be speaking complex highfalutin' whatnot that's just unreachable by me.
I've tipped my hat in this thread that there are things it can do better than me alone, though I'd say it is the combination of human and AI that shows most potential, and even as an old git in the last quarter of life I was adaptable enough to at least observe that. But for prose, literature, the written word in general, the current evidence is that AI sucks bricks and is destroying literacy by churning out endless meaningless pablum. The Register ablation article hints at how the enshittification works, but it's easily observable in the wild.
It doesn't have to be this way, but I postulate AI's achilles heel is in the arts and humanities. I used AI header art for a couple of posts because I can't draw for shit, but in the end I concluded it lacked integrity, because the offensive sameyness was still there, just in a different medium. FWIW I am definitely not of the humanities side of CP Snow's Two Cultures and I failed Eng Lit at school, but even I can see AI writing stinks, and can usually clock it by the second paragraph.
I will admit I didn’t allocate enough clock cycles to grok your post #12, I don’t quite share the passion for AI and I don’t think it will be anywhere near as transformational as it’s made out to be. But what I think doesn’t matter, it’ll be what it’ll be, though I have rotated out of some American exceptionalism so I’ll miss out on some of that. I sleep easy enough, notwithstanding.
I take all your points @ermine. Tbf, if you skim the 'Call centre' thread you'll see me veering like Boris' trolley between the aisles signposted doomerism, denialism, 'dismissism' and boosterism. I ain't got a clue what all of this AI stuff means for the future (or even just right now); but I'm pretty confident that no one else does and, as such, I think you have to try to keep a bit of an open mind, whether you love it or loathe it (or neither).
Per my comment #5 in this month’s Moguls’ piece, I’m actually steering away from this AI/ML stuff (and the US) just at moment (although I might reverse course in the blink of an eye).
Personally I’m finding enterprise LLMs simultaneously both impressive/ useful and underwhelming/ disappointing. It’s very….weird.
On the actual asset and securities mix (Taleb's “show me your portfolio don't tell me your ideas” approach), there's just so much else besides Yankee big cap tech which is arguably rather more attractively valued, on either or (more often) both a cross-asset and a relative-to-history basis, and where the price doesn't necessarily reflect years of anticipated perfection in aggressive revenue and margin growth.
P.S.: I thought you might appreciate this meme take on A(I)nxiety:
https://open.substack.com/pub/shrubbery/p/ainxiety
My two-penn’orth on AI.
I have been working with AIs off and on for 40 years – from rules-based systems, neural nets and QSARs to large language models. I have no experience with their use in coding, so I can't comment on that, but I have used them extensively in research and analysis.
The AIs based on statistical methods are particularly good at identifying interesting places to look within a problem space, and I have brought several products to market based on AI suggestions. What I have observed over the years is that this kind of tool is effective within the boundaries of the training data, but as soon as you go outside that, the results become very flaky. A great deal of waste was created by people enthusiastically projecting AI investigations where there was insufficient training data. I see the same thing happening with LLMs.
The best use case for my situation remains asking the question, “Have I missed anything?” When I'm writing a report, it is useful to check whether I have missed any important ideas, counterarguments or examples. I always ask for citations, and I always check them. Depending on the domain, between 1 in 10 and 1 in 3 items it finds for me are hallucinations or wildly misunderstand the author's meaning. So for me, it is a good search tool and a moderate research assistant. The AIs I have access to are also reasonably good at summarising complex documents, especially when the authors have not bothered to produce a readable summary themselves.
So I am with @ermine on their potential for art and literature. The current versions are good at plausibility, whilst being rubbish at creativity. How soon will we reach AGI? I don’t know, but we are very, very far away from the kind of associative and intuitive leaps the human mind excels at.
The problem is the AI boosters are pushing the current tools into places where they don’t work. Last night I watched Prof Hannah Fry’s new programme on AI on BBC2. It did not have much on the mechanics of AI, but it did look at the problems of some of the use cases. Particularly, examples of AI-induced psychosis where vulnerable people are dragged into an imaginary world supported and encouraged by chatbots. The technical challenge seems to be that chatbots need to be supportive and encouraging for users to use them. Chatbots that are more questioning and challenging are seen as aggressive, argumentative, and unattractive. So chatbots that suggest suicide might be a bad idea, or that plotting to assassinate the Queen of England might be a step too far, would not survive. As expected, the purveyors of AI see this as entirely a problem of the users. “We just make the tools squire, it’s not our problem how people use them”.
We are in an interesting societal and technological race between the valid and limited uses of the AI we have today, and the damage that can be done to individuals and society by releasing these tools into the wild with absolutely no guardrails or consequences.
Those are extremely helpful and thoughtful insights from a deeply knowledgeable insider perspective @old_eyes #21. Thank you ever so much for sharing them here 🙂
Do you think that the paradigmatic / architecture limitations of neural nets as so far realised are overcomable in practice by a pivot within normal investing horizons (say 30 years out to the mid 2050s), i.e. do you think that we’re likely in practice to both successfully change course and to get to full AGI by then?
Or is it all just a pipe dream?
I ask in recollection of what a colleague said and I (mistakenly as it turned out) agreed with when Covid struck, namely that it had been (in early 2020) 18 years or so since SARS Cov 1 first struck Asia and (at that point) we still had no vaccine for it, so we both thought that we wouldn’t get one quickly for this (then) new coronavirus.
What the colleague and I underestimated (or misunderstood) was the orders of magnitude greater research intensity and funding availability as compared to 2002/3 (and immediately post), which produced rapid breakthroughs and several effective vaccines for mass production in 2021.
Could it be the same for a pivot away from LLMs now/soon towards some better approaches that can collectively get us to continuously learning and fully autonomic generalised AI?
And one that can run on a phone battery in a toaster sized device rather than running off a nuclear power plant hooked up to a data centre the size of Manhattan whose GPUs and TPUs need replacing every couple of years (??)
@DH nice shrub quote in #20 😉
> Could it be the same for a pivot away from LLMs now/soon towards some better approaches that can collectively get us to continuously learning and fully autonomic generalised AI?
AIUI an LLM is a particular case of AI – f'rinstance I don't think they use ChatGPT for the protein folding stuff, which seems to be a genuine advance on anything before. LLMs have their uses – they can generalise code well in excess of my limited experience, so in my case man + LLM is a better system than man alone.
But I just don't think it's valid to infer the general from the particular. I'm sure there will be good non-LLM AI applications using as yet undeveloped approaches. People will find good applications for LLMs. But to infer from recent progress with LLMs – which to my eyes seems to be flattening out, at least as far as the abuse of the written word goes – that AGI will far outstrip humanity, well, it remains to be seen, but it looks like one hell of a jump from what we're seeing now.
It is regrettable that one of the greatest applications of LLMs is to enshittify all the things, particularly the information space, and at pace. That may make Zuckmanfried very very rich, but I wouldn’t say it’s improving the experience of the internet any. It’s not without niche value, search can be improved by AI provided you drive it on manual, but drowning us in slop is Not A Good Thing IMO, to the point that it is becoming more rewarding to search for interesting stuff on the net using methods that discriminate against Big Tech in favour of the small web of years gone by.
IMO in ten years’ time people will look back at the LLM fad as a craze like pokemon. I’m not saying people will be using AI less in ten years’ time, or that LLMs will have disappeared, but perhaps there will be more diversity in approach. I am extremely relaxed about the putative hazards of AGI turning humans into grey goo to fuel their dastardly plans of world/universal domination, however. These tech bros need to get out more.
@old_eyes. Your experience mirrors mine. As a teenager in the mid 80s, I remember articles in BYTE magazine talking about how symbolic AI, and agents based on that, would revolutionize the workplace. That idea peaked in ’87 and was quietly forgotten by ’88. In the early 90s, I spent my summer vacations from undergrad developing neural nets to do signal processing. Early progress implied great things but these things tailed off, eventually becoming just another tool in the armoury used by my employer.
As complex as the LLMs are, they are still based on something that looks very much like the verbal analogy of a stochastic curve fitting algorithm. Curve fits often are poor at extrapolation beyond the initial data set. Moreover, as the data set becomes ever more complex, there is this assumption of convergence to a stable solution. That is rarely the actual case. Instability is more often the outcome. Scale independence is not guaranteed.
I see no special sauce in the human condition that means AGI is not possible. I see many genuine use cases, such as types of coding. I’m just not sure LLMs are the complete answer. Necessary but not sufficient. Plus, I see an awful lot of LLMs either solving problems that were caused by the prior generation of technology or just generating new dross. At my office, we all use LLMs to some degree. They are helpful but they are not reliable. Moreover, actual progress in genuine machine learning on the hard problems in finance is slow. That despite large amounts of capital, vast compute power and numerous PhDs being thrown at the problems.
I'd also point out that there is a substantial overlap between the type of very smart people who are currently doing this research and the type of very smart people who also extrapolated rapid progress in areas such as string theory, fusion, quantum computing etc. They are genuinely very smart, but it doesn't always play out. Albeit the amount of capital at risk perhaps makes the current AI boom now “too big to fail”.
@ermine: there’s definitely a sort of anthropomorphic preference for seeing human value in word prediction tasks, but not in the sorts of somewhat more abstract ones like the protein folding solving, as undertaken by Google’s AlphaFold/ DeepMind.
Pun intended, but it gets ‘meta’ here, as Perplexity cites Claude and Gemini as sources to explain the differences thus:
“DeepMind’s famous systems like AlphaGo, AlphaZero or MuZero are mainly reinforcement learning (RL) agents that learn by acting in an environment, while LLMs like Gemini or ChatGPT are large predictive text (now multimodal) models trained mostly by next‑token prediction on static data”;
And,
“DeepMind RL agents (AlphaGo/Zero, MuZero) learn from trial and error: they play games against themselves (self‑play), get rewards for winning/losing, and use RL plus planning (Monte Carlo Tree Search) to improve a policy and value network”.
All very Joshua in WarGames to my mind, whereas LLMs are pure curve fitting.
@ZX: whilst things have slowed down a fair bit from the manic pace of progress in the years from the Fifth Solvay Conference of 1927 to the emergence of the Standard Model over 1973-75; fundamental physics still isn’t doing too badly since the first String revolution of 1984. M-Theory. The Holographic Principle. The Swampland project. LQG alternatives now falsified. And whilst we may not obviously live in AdS space, our universe could be an effective de Sitter brane in an AdS space with a 5-D cosmological constant from the bulk and the 4-D constant from the brane giving accelerated expansion:
https://arxiv.org/pdf/2010.03391.pdf
On AGI, with so much resource being thrown at this now (already more than was expended on the building of the interstate highways and on the Manhattan and Apollo projects combined – that's as a share of GDP; in real terms it's far more – and with AI-related expenditures soon to exceed the rollout of the railways), I can't help but feel that the prospects for a breakthrough are far higher now than before hope (last) turned to disappointment just before the previous two or three AI winters.
Unless one subscribes to Roger Penrose’s out of epistemic lane speculations on the necessity of microtubules and quantum effects for consciousness (very fringe), then there has to be a way to do what the brain does with at least equal economy and efficiency in terms of energy and volume/mass. It’s all just arrangements of matter etc.
> It’s all just arrangements of matter etc.
We are all prisoners of the ontologies we choose to live inside 😉 An alternative example to the one I think you have cited in the last sentence is the cryonics crew who have the ambition to reactivate dead heads. This starts from a closer copy of the hardware. No success to date.
@DH. I would, respectfully, disagree with your view on the foundations of physics. As Wolfgang Pauli said “it’s not even wrong”.
String and m-brane theory have gone down a rabbit hole. Swampland, multi-verses, totally unphysical de Sitter models. None of this is empirically testable. It’s not physics. At best they are producing some beautiful math. At worst, it’s just theology.
There is absolutely nothing wrong with that. The idea that progress in areas such as the frontier of physics should be continuous or linear is ridiculous. Even if 99% of the lives of some of the smartest brains humanity possesses are wasted in what, in hindsight, turns out to be futile research, that is still an amazing return on the investment.
Nonetheless, progress has been minimal. Nearly all of those who I did my PhD with, most Professors at very elite institutions, know this but cannot say it aloud. Many have slid quietly into other areas, including quantum information/computing and AI. They apply the same level of hard work and optimism to these areas as they did to Brane theory three decades ago. They also extrapolate current progress as they did three decades ago.
None of that takes away from the fact that, yes, AI will be impactful. That’s guaranteed. Just because of the vast economic outlays. This is spending on a war footing. Yet, if we had deployed even a fraction of this money on fusion, would we have commercial reactors by now? Would that have been more impactful?
These comments are getting into some really tasty philosophical areas, but I will stick to the AI question for the moment in response to @Delta Hedge #22.
When will we reach AGI? It turns out we don’t have criteria to tell us when it arrives. A lot of the variation in AGI timelines comes from the definitions people choose. One definition is – can take professional exams. Well, I have taken quite a few and set some, and professional exams focus on knowledge recovery and exposition. Sometimes, they include ‘trick’ thinking questions, but mostly, for good reason, they are about ‘do you know your shit?’ That sort of task is perfect for the AI’s we have today, so the timeline estimate of 2030 feels possible. At the other end, a measure of AGI is – can replace AI researchers. This is much further away, estimates 2050+. That feels more realistic to me as I don’t think we have any idea how to do it yet.
The majority of AIs today are probabilistic. Given a trajectory through a multi-dimensional information space, what is the most likely thing to come next? That’s LLMs, and also the protein folding (with added constraints about configurations that are geometrically or energetically impossible). There is no reasoning, just probability. For that reason, I think there will need to be a change of approach to achieve what I would call AGI. Simply throwing money and scaling the methods we have today is unlikely to succeed, unless we reach the point of fully modelling the brain.
The link to mRNA vaccines and Covid is interesting. That was an example where all the necessary science and technology was available, but we had no compelling reason to use it. mRNA research leading to the vaccines started in the 1960s, but we couldn’t use it because we had no delivery mechanism. That came around the 1990s (from memory, I am away from my references). Then mRNA flu vaccines were tested in mice around the 1990s, a rabies vaccine tested in humans in 2013, and ebola vaccines also tested sometime around there. So all the bits required for widespread use of mRNA vaccines were in place, and the benefits were clear. What was missing was a driving market need.
Along comes Covid, and suddenly we have a pressing need, and money is no problem. Everyone was scrambling for a vaccine, and both conventional and mRNA were used. mRNA vaccines were a technology ready for deployment. The money mattered, but so did the readiness.
The Manhattan Project and the Apollo mission to the moon were other examples of throwing money at a problem to speed things up, but again, most of the required bits and pieces already existed, from scientific theory to materials to engineering.
In the case of AGI, I am not sure we have all the bits we need. Maybe quantum computing will unlock the puzzle. But at the moment, we are investing huge amounts in an approach that will solve certain types of problems and yield some specific benefits, but not, in my opinion, AGI. We have not yet reached the stage where money solves the hard problem.
There is an interesting article by the climate scientist Zeke Hausfather on how AI works for him. Assembling data from disparate sources, visualising data, analysing data, building websites, experimental design etc – great. The insight and intuitions required to figure out what it all means – no help. https://bsky.app/profile/hausfath.bsky.social/post/3mfmq7c4bhe2r
The future many of us hope for is the Centaur – human guiding intelligence and creativity strengthened by AI. The fear is what Cory Doctorow and others call the Reverse Centaur – humans reduced to a cheap pair of hands for an AI.
@ermine, @ZX and @old_eyes. Thank you all for your thoughtful and thought-provoking replies. Sorry about the delay in replying (work 🙁 )
Yes. We don’t even have a route map, less still any verified means of transport, to get us to AGI.
But it exists (surely?) somewhere as a high complexity, low entropy peak (perhaps a very narrow one) in the landscape of ‘phase space’.
We might be close to it in the landscape, or not; but without sight of the AGI peak, how can we ever know? Should we press on or turn back? Who can say? Decision making under radical uncertainty.
But…we certainly can’t lean in too hard though on the current cul de sac of LLMs.
I mean, what could possibly go wrong?:
https://open.substack.com/pub/garymarcus/p/code-red-for-humanity
As far as I can tell the last time 2% of GDP was spent on an idea was on the Manhattan Project. It had a timeline and an outcome, and incidentally the majority of the founder companies involved in that remain in business today. It's hard to describe the pursuit of AI in the same terms – what's the end state and by when? Does this satisfy the basic requirements of good business or does it look more like a hobby?
@ZX #16 (23/2/26): “it’s got zero value”: I think part of the problem here is that we lack a standard ruler:
https://open.substack.com/pub/davefriedman/p/the-compute-market-is-building-in
> I think part of the problem here is that we lack a standard ruler:
I'm not sure one is possible in this field. To take an analogy – many moons ago I owned a 286-based PC. Theoretically it would work as well as it did in the late 1980s, but ZX is right. It's got no value, even if it functions. You can't have a standard ruler as a measure of the value of a complex system that becomes obsolete over a period of a few years due to improvements in hardware. This is why old PCs, digital cameras, smartphones depreciate to zero over a period not usually exceeding ten years.
Compared to the cited reference's example, a mmBTU of gas does the same today as it did fifty years ago, and a standard ruler is possible. People have tried to measure compute power, eg in petaflops, but if floating point ops are not relevant to AI, or some aspect of parallelism is needed which is not captured by the measure, and worse still if the weighting of these parameters varies over time, then you just can't define the unit in a useful manner.
@ermine #32: Capex depreciation treatment is the most egregious example of the radical uncertainty around the numbers.
But given the unlisted nature (for now) of the whole OpenAI/Anthropic/xAI complex, and given that results for Gemini (and Perplexity) aren't (at least properly) broken out from Google's financial statements, nor Copilot from Microsoft's, we really don't know, less still understand, who's making what in respect of which lines of business, how they're making it (if at all), what the true (all in) costs of sales are, and, therefore, just how big the losses/rate of cash incineration is. And that's without even thinking about the future!:
https://www.wheresyoured.at/the-ai-bubble-is-an-information-war/?ref=ed-zitrons-wheres-your-ed-at-newsletter
There are (far) more questions here than there'll ever be answers for; that's the only statement that I'd make for sure (at this time).
All is a ‘storm of mysteries’, or as Frank Herbert puts it in GEoD:
“Questions are my enemies. For my questions explode! Answers leap up like a frightened flock, blackening the sky of my inescapable memories. Not one answer, not one suffices….I am a chip of shattered flint enclosed in a box. The box gyrates and quakes. I am tossed about in a storm of mysteries. And when the box opens, I return to this presence like a stranger in a primitive land.”
It makes thinking about what might happen next in the Middle East, and its impacts on economies and markets, look like a walk in the park in comparison. A total guessing game across multiple layers. I can scarcely rule anything out or anything in.
> I can scarcely rule anything out or anything in.
I view it as religion, particularly the AGI thing. Trying to apply logic is using the wrong tool for the job – it's a battle of belief systems. Sure, what LLMs do can be impressive at times, particularly for folk who have no grounding in the humanities (like me 😉 ) but I am slightly immunised by the cynicism of age. AI has great talent at producing plausible pap; the hard problem for many seems to be telling the rubbish apart. You really have to watch search these days – AI can lie with verve and aplomb in ways no human can, as well as reproducing all the usual human error.
It is surprising that so much of coding seems to be doing similar things to what others have tried before, which seems to be how vibe coding gets ahead in less talented hands.
> what might happen next in the Middle East
Perhaps raising the price of energy may hasten the AI reckoning, by shaking the excess leverage out. As ZXSpectrum48k said upthread, imagine what all this nervous energy and capital could have done for us if applied to some of the known difficult problems we have rather than trying to play God and create an entity in our own image, that never ends well. We could do with getting our brightest and best back on this planet, and their heads out of the clouds.
Guess that you’re firmly in the camp of “it’s only a tool”, and of making arguments from experience over making assertions from ‘evaluations’ (“evals”, in SV speak….)
https://open.substack.com/pub/erikhoel/p/bits-in-bits-out
IJDK. I use this stuff in the day job. Never had anything so useful. I can't imagine it's not hitting graduate employment into the knowledge economy. Why spend £40,000-£60,000 (+ pension contributions + 15% employer NI + paid holidays and sick leave) on a know-nothing grad when you can spend £200-£300 pcm on something that outputs 10x-100x as fast, and which, with iteration and human QC, can be better? Shit for creative writing for sure, but for reports, assessments, advice blah blah. Maybe not so sceptical there.
Of course, the overhype factor is probably already killing people, as I’ll link to from Gary Marcus recently over on the ‘First they came for the call centres’ thread.
Yep, nice citation. I like the strapline.
I’m with Erik Hoel. I don’t think he’s saying AI can’t be useful – but like any tool it needs to be in the right hands.
> I can’t imagine it’s not hitting graduate employment into the knowledge economy
Ever since we had a target of 50% of school leavers becoming graduates, graduates ain't what they used to be. If AI is useful, particularly in the right hands, then this is only to be expected. Too many folk want to enter the professional and managerial class by chuntering out pedestrian pap. There's a whole load of people making a lot of things more complicated than they need be. There are a lot of reports that don't get read. As an example of this PMC overcomplication, I wanted to get a group to survey some wildlife. You qualify habitat these days using UKHab, particularly if you want to demonstrate a biodiversity net gain (BNG) – it took me half a day to decode the acronym because I don't swim in the right circles. UKHab is free but designed to be impossible to use unless you have an approved paid-for app. Fortunately somebody has a QGIS template on GitHub that will crack this for me by inspection and intelligent hacking. The UK has an honourable and extended tradition of amateur naturalists from Gilbert White's Natural History and Antiquities of Selborne onwards, but the PMC needs to find jobs for many also-ran grads and so it tries to turn things into a closed shop, in this case for ecological consultants. Give me one volunteer for ten pressed men and all that – Gilbert must be spinning in his grave.
That’s a minor example. AI used in intelligent human hands will bust the ass of a lot of make-work, because make-work is pedestrian enough that it will have been done before. These are jobs that should be automated out by AI and they will be. In an ideal world we’d fight the fire at its base and not chunter out this make-work, but if you gotta have make-work then pay AI rates for it.
What is dangerous about AI is the AGI believers, because they will burn outrageous amounts of capital in pursuit of a chimera. The evidence is mounting that if you want AGI then this is not the way to do it. By the fruit shall you know the tree, and at the bleeding edge of literacy the output of AI stinks. Sure, literacy is a minor aspect of AGI, but yer LLMs are not the top 10% of talented writers.
AGI is a beautiful theory. So is world peace and the harmony of the spheres. As ZXSpectrum48k highlighted, this is not the first time in the history of the world that a load of incredibly clever people have chased down dead-end ratholes of beautiful intellectual constructs and it won’t be the last. In very small ways I often chased the theoretical perfection at the expense of useful practice, people who live in their heads too much just do that.
In ten years’ time we won’t be able to understand how people got things done without AI assistance, but the sooner we can lose the religion of creating intelligence in our own image the better for the allocation of resources to all the other things that could do with fixing in the world. The current hype reminds me of the classic bleary-eyed Vegas gambler on his uppers but thinking just one more turn and it’ll be all right. And it’s a real bitch that this leverage is now looking like a major threat to the financial system, which in our case is already weakened by the secular decline of the west.
@ermine #36: re: the pursuit of a chimera: it certainly *could* be that the dreams (nightmares?) of America’s technogarchy are a collective version of Girolamo Savonarola’s fifteenth century Bonfire of the Vanities, and that we’re not living in some sort of ‘end times’ ante chamber to ‘The Singularity’ (such that it will turn out that now is no more hingey than at any other moment in history).
But I wouldn’t want to stake my life on it.
I wouldn't even want to stake all my investments on it either.
There’s a very good argument that things are getting pretty hingey. Maybe there are Great Filters (Fermi level) behind us AND immediately ahead.
No reason why it has to be one or the other. It could be both.
So I’m ruling nothing in or out.
Per my Call Centre thread contributions, I'm placing the odds of human survival if there's ever full ASI at around 50%, with uncertainty bands around that risk from 10% (optimistic) to 90% (pessimistic).
I’m pretty clear we’re not getting anywhere close to, or anything like, AGI, less still ASI, with LLM only approaches.
Thing is, humanity isn't only pursuing LLM approaches (although they're hoovering up most of the money and the brain power for now), and as a species we're very definitely not limited in the future to trying that, and only that, architecture.
Edison had a thousand goes at the filament bulb before eventually getting it to work, but he did succeed. Those thousand attempts weren't, as such, 'failures', just ruling out other, as it turned out, sterile possibilities.
The more immediately plausible risk of Darwin Among The Machines (per the first AI futurist, Samuel Butler, writing in 1863) is of humans misusing narrow AI/capable but limited scope ML. Herbert’s 10,000 year hence Luddite crusade against the thinking machines (the Butlerian Jihad), which is 10,000 years in the past in Dune, is on that very premise. It wouldn’t be the first time civilisation has tried to turn back the clock.
In any event, even without either AGI/ASI or human misuse of narrow AI/ML there’s a realistic scenario this turns out Ok:
https://monevator.com/weekend-reading-first-they-came-for-the-call-centres/#comment-1939917
We probably want to aim for the maximum realistically attainable possibility of an at least Ok outcome. God knows what that means in practice now. We can't successfully do a Canute and hold back the tides. But like the good Dane we might try, unsuccessfully, to do something along those lines if it looks like it's all going 'a bit Pete Tong', as they used to say in my misspent youth.
Yeah, I've been cynical on The Singularity for roughly as long as I've heard about it – I expect to live long enough to see Ray Kurzweil planted in the conventional way. But as the screenwriter William Goldman said, nobody knows anything. The screeds of conflicting AI bloviation you cited a little bit before the reference seem to confirm his wisdom. Personally if I saw that amount of white noise on the radar I'd switch the damn thing off and go have a beer and watch the sunset, and accept we take the incoming or not as and when it shows itself.
The amount of capital being poured into GPUs and whatnot while nobody seems to know how to really turn a profit in AI as a service is all very well, but in the end things that can't go on don't, even if you can't call the specific date of the Minsky moment. This isn't particularly susceptible to mentation – we aren't privy to the details, and even if we were, capitulation is an issue of vibes as well as financial reckoning.
I think what is clear now is yes LLM AI can be useful in some cases, no LLMs will not produce AGI, and it seems to be the devil’s own job to turn a profit in providing AI as a service. Much more than that we have insufficient information to determine. As previous examples of the class of problem, fusion power has been 20 years away from before I was born, flat screen displays were always five years away from the 1980s until they actually happened in the late 1990s. As WG said, nobody knows anything 😉
[NB: Apols for all my earlier typos. Walk and talk works, but never try typing and walking.]
The future ain't what it used to be, as Yogi Berra put it. No unmetered electricity from clean fusion. No Blade Runner spinners or Marty McFly hoverboards. No spinning Bernal Spheres waltzing the void to the strains of Strauss's Blue Danube. It's not quite full-on Thiel “We wanted flying cars, instead we got 140 characters” territory, but the ability to partly automate some aspects of (largely BS) PMC desk roles is not getting us to Star Trek: The Next Generation anytime soon.
A lot of people though have their reps and their wodge invested in this ‘thing’ being ‘real’.
It seems that Pete Hegseth's DoW has fallen (hook, line and sinker) for the Singularitarians' persuasive patter:
https://open.substack.com/pub/thezvi/p/anthropic-officially-arbitrarily
They think that Claude is AI and that AI is the next atomic bomb. They’ve been sold a pup without even realising it. If you can’t work out who the mark is, then it means that it is probably you 😉