What caught my eye this week.
This week, the world of investing is buzzing about ChatGPT, a revolutionary new development in the field of artificial intelligence and natural language processing.
ChatGPT, or ‘Chat Generative Pretrained Transformer,’ is a large language model trained by OpenAI. It has the ability to generate human-like text based on a given prompt, making it a powerful tool for a variety of applications.
One of the most exciting possibilities for ChatGPT is its potential to disrupt the world of online communication. With its ability to generate realistic-sounding text, ChatGPT has the potential to revolutionize the way companies communicate with their audiences.
For investors, the emergence of ChatGPT and other AI technologies raises some important questions. How will these technologies impact the companies in which we invest? And how should we adjust our investment strategies in response?
One potential consequence of the rise of AI is that it could lead to increased automation in various industries. This could reduce the demand for human labor, leading to job losses and potentially impacting the bottom line of companies that rely heavily on human workers.
At the same time, however, the development of AI technologies could also create new opportunities for growth. Companies that are able to effectively utilize AI and natural language processing could see increased efficiency and productivity, leading to improved financial performance.
However, there are also some potential downsides to the widespread use of AI for content creation. With large amounts of automatically-generated content being produced without human oversight, there is a risk of unreliable or even fraudulent information being disseminated. This could have negative consequences for both companies and investors.
Furthermore, the use of AI-generated content could also make it easier for companies to disseminate convincing-sounding but ultimately flawed financial advice. The average person may not have the knowledge or expertise to spot the difference between reliable information and fake news generated by AI. This could put them at a disadvantage when making investment decisions.
In order to navigate these potential shifts in the market, it’s important for investors to stay informed about the latest developments in AI and natural language processing. By keeping a close eye on the companies that are leading the way in these areas, investors can position themselves to capitalize on the opportunities presented by these technologies while also minimizing the risks.
One way to do this is through the use of index funds. By investing in index funds, investors can own a piece of the companies that are driving the development of new technologies like ChatGPT. This means that no matter what changes the future brings, investors can be confident that they will own a share of the companies that are at the forefront of the latest technological developments.
In conclusion, everyone is excited about ChatGPT this week, and for good reason. It’s a revolutionary development that has the potential to disrupt the way companies communicate.
‘More human than human’ is our motto
What do you reckon to that, eh?
Bit flat? Lacking the puns, schoolboy humour, and anti-Brexit tirades you’ve come to expect on a Saturday from Monevator?
Yes, you guessed it – you just read the output from ChatGPT itself.
Here’s the prompt I gave it:
I suppose one bit of good news for scribble-smiths like me is that it can’t hit a word count. I asked for 700 words, and it’s delivered 479 of them.
Otherwise: cor blimey.
Observant readers may have noticed me slipping stories about machine learning into Weekend Reading for the past few years. I am both fascinated and paranoid about where this is going.
One of my few certain talents is that I can extrapolate better than many people. As such I (a) was not shocked by the proficiency of this latest model, and (b) am less reassured than many by its clear limitations.
It’s a giddy time for advancements in machine learning and AI. Personally, I think those closest to it can be complacent. I feel they don’t appreciate the rate of advance and dwell overly on the near-term shortcomings. Sort of like how you can’t see your own kid growing tall and talented until a distant relative visits and is surprised.
Sure, we don’t know exactly what is growing capable inside these machine learning models.
But it’s doing so quickly!
I’m afraid. I’m afraid, Dave
There’s so much to be said about this, even within the narrow terms of investing. If you want another hit then check out this beautifully written post by Indeedably:
The promise of what this technology will offer in the future in equal part excites and terrifies me. Much like the early internet I encountered during that hungover tutorial, that future promise far exceeds the realities of the current implementation.
Much like that early internet, I can already start to see just how transformative it has the potential to become. The white-collar world has long been a safe harbour for well-remunerated workers to finance a comfortable lifestyle endlessly moving data, producing slide decks, torturing spreadsheets, and writing code.
Those workers are about to experience first-hand what their agrarian, mining, and production line working forebears felt like a generation or three ago. It will be fascinating to watch the evolution.
No chatbot is going to match Indeedably’s copy anytime soon. Nor, I hope, ours.
But at the same time I’m sure that right now thousands of people are trying to figure out how to spin up vast AI content farms to game Google and suck away Internet traffic for advertising pennies. (Even though in the long run, ChatGPT-style models will kill generic content silos. And maybe even Google search.)
Some spammer’s traffic gain is every other web publisher’s loss.
Perhaps Indeedably and I need to worry even sooner than I thought.
You are terminated
Maybe ChatGPT has already killed the traditional student essay. Maybe in the future we’ll have to sign everything we create (via a blockchain) to prove it isn’t a deep fake. Or that something else is a fake, by the omission of such a signature.
Perhaps we’ll have to show our identity papers to write a comment on Reddit. Already user-generated sites like Stack Overflow have been afflicted.
Will a grey goo of cruddy auto-generated verbiage swamp the Internet as we know it? Or should we be more worried about the day when everything a bot writes is really good?
For now the moderators at Stack Overflow are worried about bad ChatGPT programming code being submitted.
But in the long run that site’s readers should be ready for its good code disrupting their jobs.
Similarly even fiction writers – indeed the entire creative class – are now on notice. Machine learning will be a tool for a while, but it could conceivably become a threat by mastering the things that we thought made us most human.
What do you think? Are you worried a young and hungry AI is coming for your salary? Let us know in the comments.
Oh, and come on England!
From Monevator
Bond terms jargon buster – Monevator
Greedy buy-to-let landlord or mortgage prisoner? – Monevator
From the archive-ator: The cost of active fund management – Monevator
News
Note: Some links are Google search results – in PC/desktop view click through to read the article. Try privacy/incognito mode to avoid cookies. Consider subscribing to sites you visit a lot.
UK banking rules in biggest shake-up in 30 years – BBC
House prices fall at their fastest rate in 14 years, says Halifax – Guardian
Would an England World Cup win boost British business? – This Is Money
Bank of England likely to raise interest rates to 3.5% next week – Yahoo Finance
BP agrees to install up to 900 EV charge points at 70 M&S retail outlets – This Is Money
‘Goblin mode’ chosen as OED’s word of the year – CNN
UK set to unleash an historic debt deluge [Search result] – FT
Products and services
Mortgage lenders cut rates by up to 1% ahead of base rate hike – FT Adviser
Financial advice: is it value for money? [Search result] – FT
Postcode checker: how has your High Street changed since 2020? – BBC
Does tin foil behind the radiator beat the cold? – Guardian
Taxpayers on the hook for billions from energy supplier failures [Podcast] – A Long Time In Finance
Hargreaves Lansdown is offering £50 to £1,000 cashback when you transfer your ISA and £100 to £1,500 cashback when you transfer your SIPP (terms apply to both offers)
Should you ever use or buy gift cards? – Be Clever With Your Cash
Mortgage brokers are training as mental health first-aiders to support vulnerable homeowners – This Is Money
“Thameslink fined me for sitting in the wrong seat even though I had a ticket” – Guardian
Homes for a cozy Christmas, in pictures – Guardian
Comment and opinion
Debunking myths about 60/40 style portfolios – Vanguard
Nest’s target-date funds and the perils of dead wax – Henry Tapper
A history of the UK national debt [Podcast] – A Long Time In Finance
Bonds versus bond funds over the past year [US but relevant] – Morningstar
How to get rich by working for it – Darius Foroux
Don’t get lost in a down stock market – A Teachable Moment
What is fractional ownership? And is it the new buy-to-let? – Yahoo Finance
Do you think about money differently compared to a year ago? – Humble Dollar
Privilege doesn’t start with the super-rich [Search result] – FT
How to host huge family gatherings through the generations – Humble Dollar
Crypt o’ crypto
iPod creator Tony Fadell is trying to build the iPod of crypto for Ledger – Wired
Naughty corner: Active antics
US small cap stocks look really cheap – Morningstar
An interview with UK small cap tipster Simon Thompson – Investors’ Chronicle
How a basket of ETFs mimicked the performance of top hedge funds – Institutional Investor
Elon Musk gambled big on Twitter. Tesla will pay the price – Insider
Covid corner
The phase of the pandemic where we pretend it’s 2019 – The Atlantic
China’s health system isn’t ready for the end of ‘zero Covid’ – Vox
The country also needs better Covid vaccines – Slate
Even now, nobody wants to confront the awful truth about Britain’s lockdowns – Douglas Murray
Yes, immunity debt was worth it – Slate [and how this headline evolved – Unherd]
Kindle book bargains
Bad Blood: Secrets and Lies in a Silicon Valley Startup by John Carreyrou – £0.99 on Kindle
Surrounded by Bad Bosses and Lazy Employees by Thomas Erikson – £0.99 on Kindle
The Business Book by DK Publishing – £1.99 on Kindle
Quiet Leadership: Winning Hearts, Minds, and Matches by Carlo Ancelotti – £0.99 on Kindle
Environmental factors
Vanguard quits net zero alliance, citing need for independence – Reuters
Mumbai embraces its booming flamingo population – Hakai Magazine
Sperm counts are falling worldwide. Why? [Podcast] – The Ringer
ESG funds are rethinking the case for nuclear – Morningstar
Off our beat
Ideas that changed my life – Morgan Housel
AirBnB is WeWork – Dror Poleg
Credit cards as a legacy system [Really fascinating read] – Bits About Money
Almost everyone in South Korea is about to become one or two years younger – Reuters
Our new love affair with the office is a step towards a better philosophy of work – Guardian
The Dad-ification of fashion – The Cut
Is America still on the path to authoritarianism? – Brian Klaas
How to hold contradictory ideas in your head at once – Ryan Holiday
And finally…
“He commuted to his Canadian office in a Ferrari, though sometimes snowy conditions forced him to use Bentley.”
– Sebastian Mallaby, More Money Than God: Hedge Funds and the Making of a New Elite
Like these links? Subscribe to get them every Friday! Note this article includes affiliate links, such as from Amazon and Interactive Investor. We may be compensated if you pursue these offers, but that will not affect the price you pay.
So you wrote a 100 or so word prompt (and probably tried several versions)
And the AI generated a passable 479-word article, which you reviewed and decided was acceptable
It seems like a nice productivity tool for generating online content, but not earth-shattering
To be honest a lot of content on other personal finance blogs already looked like it was generated by an automatic content creation program
These tools don’t take jobs away; they make people more productive. It’s like Excel for writing
@Neverland — Yes, that’s the optimistic vision, for now. (Remember they will keep getting better, at pace, for the foreseeable.) In terms of prompts, I only did two tries. (I was playing with ChatGPT to prove a point for a friend, then thought the output could be the intro to this Monevator article afterwards.) I added the line about bad advice to my second prompt at his suggestion.
I’d say in the first iterations people who make a living by being passably articulate are most threatened. Generic copywriters, company press people and so on. My guess would be that at most 10% of the population can write at the level of the ChatGPT output above. And perhaps 0.01% of them could do it in 30 seconds.
But yes, it’s very possible that job continues to exist and becomes (for a while) chatting to the CEO about this week’s strategic messaging, and then working with your AI copy-bot to finesse your corporate communication, say. That’s the near-term dream.
I asked it a fairly simple question on my specialist subject (law). The result was an answer that, while it looked like a real person had written it, was complete nonsense. It sounds like the experience on Stack Overflow was similar.
The strange thing is that the top Google results from the same question were spot on – ChatGPT is obviously doing something very different from regurgitating search results.
So in my view – there’s still a long way to go, but maybe we’ll get there sooner than we think?!
@Investor
I have the perspective of age regrettably
Many years ago I worked as an executive for a storied FTSE250 firm now long gone
The executives obviously could not themselves use PowerPoint and neither could their secretaries
They would sketch their slide designs on paper and send them to a small DTP team to produce, whereupon the slides created by the DTP team would be amended by hand and sent back, via secretaries, for the DTP team to correct
People were quite shocked when I just did my own PowerPoint slides, saying I was being inefficient …
Blimey Monevator Team, forget your AI malarkey: would it have hurt to put a trigger warning on the excellent Finumus article?
There I am, feeling nostalgic for the lost world of my grandparents living in Befnel, sorry Bethnal Green, with their stories of extreme poverty in the 1930’s, being bombed out (twice, second time by a V1, never had a tumble dryer in the house due to the similarity of the noise) followed by a compulsory purchase of their home for a pittance in the 1970’s etc. And now I read this tale of woe that puts their lost world to shame!
More of the same please. Life is very unfair but this website has helped me to navigate it better than most.
The internet is already awash with poor quality writing created purely for the purpose of gaming Google. At least at the moment you can see what you have landed on straight away. Results that are grammatically correct but factually wrong will be harder to deal with. Making that stuff a thousand times cheaper to produce could be the end of search engines.
I asked it to write a Python program to calculate the first 100 primes. The result was correct but used a naive algorithm. After a few prompts it came up with a fast solution. Quite impressive really.
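For the curious, the gap was roughly of this kind. (A rough sketch of my own below, not ChatGPT’s actual output: trial division first, then a simple sieve.)

def first_primes_naive(n):
    """Trial division: test each candidate against every prime found so far."""
    primes = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

def first_primes_sieve(n, limit=600):
    """Sieve of Eratosthenes up to a generous bound (the 100th prime is 541)."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, limit + 1, i):
                is_prime[j] = False
    return [i for i, flag in enumerate(is_prime) if flag][:n]

assert first_primes_naive(100) == first_primes_sieve(100)

Both return the same first 100 primes; the sieve just does far less work as the target count grows.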
Teachers are going to have a whale of a time marking homework from now on.
I have worked in tech, including some ML, since before Google was a verb. This and the recent image models are that rare thing that feels like an actual step change. OpenAI is clearly way ahead, and this tech isn’t quite as easy to replicate across the industry as other things, but the potential is big enough that it’s got to happen.
@Ducknald Don
“Teachers are going to have a whale of a time marking homework from now on.”
It’s touchingly naive that you think all kids do their own homework.
Unpaid parental assistance has been a thing since the 11 plus and probably before.
Paid tutors “assisting” with assignments right up to writing whole theses was a thing before even the internet age and grew exponentially with it.
This just automates that process a little more.
I asked the bot to first write a short story on a specific topic, and the result was nothing earth-shattering, but not without wit either.
Then I asked it, given a few more prompts, to write an organisational strategy. The result was astonishingly close to something corporate humans would write. I’m tempted to say, we should let the AI bots talk nonsense to each other while we all quietly retire.
I really don’t see how this is AI. You’ve input some prompts/questions to be sent to the search engine, with the system producing a response based on pre-programmed rules around what constitutes good prose. Don’t get me wrong, the output is good (but not fantastic), it’s impressive what computers can do now, but I just don’t buy into the hype about how this sort of stuff is ‘AI’, with it replacing humans any time soon.
See also how ‘big data’ was about to revolutionize almost everything, then it quietly fizzled out.
I may be wrong, but I think you made a number of posts a few years ago about how self-drive cars were about to take over. I’m not one to predict the future, but… it isn’t going to happen!
@Scott — Evening. You may well be right about AI, time will tell, but that’s not how this system works. It’s much weirder! 🙂
Basically it just continuously predicts the next best word, based on having read and weighted bazillions of sentences in ‘training’. In that sense there’s no understanding here at all. (Though it’s not completely clear to me that human beings do anything fundamentally different.)
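If it helps, here’s a toy sketch of that loop in Python, with entirely made-up probabilities. (The real model learns its weights across billions of parameters from its training text, and samples with some randomness rather than always taking the top word, but the ‘pick a likely next word, append it, repeat’ shape is the gist.)

# A toy illustration only: a hand-made table of 'next word' probabilities.
NEXT_WORD_PROBS = {
    ("index",): {"funds": 0.7, "cards": 0.2, "finger": 0.1},
    ("index", "funds"): {"are": 0.6, "track": 0.4},
    ("funds", "are"): {"boring": 0.5, "cheap": 0.5},
    ("funds", "track"): {"markets": 1.0},
}

def generate(prompt, steps=3):
    words = prompt.split()
    for _ in range(steps):
        # Condition on the last couple of words only (a real model sees far more context).
        context = tuple(words[-2:]) if len(words) >= 2 else tuple(words)
        choices = NEXT_WORD_PROBS.get(context)
        if not choices:
            break
        # Greedily pick the most probable next word and carry on.
        words.append(max(choices, key=choices.get))
    return " ".join(words)

print(generate("index"))  # -> 'index funds are boring'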
Fair cop on self-driving. I’ve been totally wrong about that, as a good friend in the field is gracious enough not to remind me whenever we meet and discuss it. (He thinks it might prove impossible!)
So definitely nothing in the bag, no.
A Guardian article about tinfoil from 10 years ago?! Is ChatGPT also doing the links this week? 🙂
@Scott
I agree with you, despite being more into AI, as a lot of currently human-generated content is hugely derivative and could easily be replaced by AI
A decent example is the Indeedably article on the same subject linked above which has examples of AI content in it: https://indeedably.com/glimpse/
Don’t believe me about how much of current journalism is actually derivative rubbish?
Consider the Janan Ganesh article from the FT today linked above:
Google.com/search?q=site%3Aft.com+Privilege+doesn’t+start+with+the+super-rich&oq=site%3Aft.com+Privilege+doesn’t+start+with+the+super-rich&aqs=chrome..69i57j69i58.1827j0j1&sourceid=chrome&ie=UTF-8
TLDR: Upper middle class don’t know how the other half live
Funnily enough Matthew Parris wrote pretty much exactly the same article in the Spectator on 23 November
https://12ft.io/proxy?q=https%3A%2F%2Fwww.spectator.co.uk%2Farticle%2Fwe-cant-know-how-the-very-poorest-live%2F
TLDR: Middle class don’t know how the other half live
Coincidence? I doubt it
The truth is many in the “creative” and “professional” classes make a living recycling somebody else’s work
AI makes the whole process a little easier and perhaps more democratic
I’m in the camp of thinking this is pretty impressive.
The phrase “you overestimate change in a day and underestimate change in a decade” is, I think, apt here.
One can see how you will get a general replacement of human tasks over time at an accelerating rate. We are a long way away, but I could envisage this (a) acting as a companion to offset loneliness (b) the next JK Rowling being a chat bot (c) your bog standard copywriter being eliminated (d) basically anything where you need to source information, e.g. architect plans for a four-bed house – here’s 100 versions (e) computer programming etc etc etc.
I stuck in a question: “a house seller has rejected my offer, what should I do?” The answer in reality was fairly banal, but I wager it was significantly better than the average person in the UK could do. Still, nothing that you couldn’t easily get on the internet…
I am sure that AI and machine learning will find good applications, but I am not sure that the plausibility of the output is going to be a useful measure.
Cory Doctorow has written about how these AI systems can be spoofed/controlled to provide disinformation.
https://doctorow.medium.com/backdooring-a-summarizerbot-to-shape-opinion-edf5e30752ce
And Teresa Kubacka got it to write about her PhD research. She found that ChatGPT was inventing references, research groups and science that did not exist.
https://twitter.com/miller_klein/status/1601518848137912320?s=20&t=U95iyDUH4Tyd3xUgrWI5_g
Neither of these articles gives me any confidence in these tools that summarise or write for you. They seem to be OK on general arm-waving, which we all do quite a lot of, but if pushed, they will just make stuff up to sound convincing. We have enough of that going on from politicians and conspiracy nuts already. Do we need the computers joining in? Imagine what the trolls and bot farms could do with a machine that produces plausible material that rejects the consensus on climate change, or the benefits of vaccination, or the economics of Brexit, all fully backed up with sources and references that don’t exist. How many of us will try to track down these fake quotes and references?
I can see so many ways that various bad actors could use these tools, that I don’t think eliminating jobs is the biggest problem. The biggest problem will be a further erosion of any shared perception of reality.
Here are a couple of links to complement this:
– https://indeedably.com/glimpse/ – chatGPS
– https://www.bankeronwheels.com/the-definitive-guide-to-sustainable-investing/ – Interesting series on ESG
– https://www.youtube.com/watch?v=Fny-pvZS-lo&feature=youtu.be – video on 2% rule
Like everyone else, it seems, I’ve been playing with this and with the Midjourney image bot over the past few weeks. Like another poster, I asked ChatGPT to write some code for a problem I was having trouble getting my head around. The answer didn’t work, but it did move me on and got me to the right approach, which was enough to get me thinking and to not trust it, at least in the near term!
The Midjourney bot is also interesting. I’ve been using it to generate images to the “12 Days of Christmas” song, and as well as coming up with some odd ideas, it’s entertaining to see how badly the AI can count. Beyond the second or third day it had real trouble.
In other news, it’s a measure of the quality of this site’s readers (and moderation policy!), that I skipped to the comments, intrigued to see what others make of it all. Continued thanks, for this @TI.
I tested ChatGPT on Monday, and I’m proud to say that after fiddling with it for like 20 minutes, I managed to break it. Error, then more errors, then it telling me: “maybe come back at a later time”, which is a nice way of saying “go away”. Perhaps the AI apocalypse will be delayed for one week.
On the other hand, it’s quite clear to me that deep fakes, AI chatbots and AI generated content in general can indeed have a huge impact on our society. It’s everything that bitcoin wished it would be: systemic change.
I still remember the moment that a CD of Microsoft Encarta (the digital encyclopedia) changed my life. I was in high school and suddenly I had access to thousands of books, with a single search bar. Finding information that would have required me to go to the library and check out physical books every week could suddenly be done in two minutes. It was amazing, it was magical. I believe AI will be similar for the kids of today.
On the other hand, being human in an AI world will become quite hard. Spotting a deep-fake is impossible for the average Joe. Recognizing a phishing email with perfect grammar and consistent voice? It’s hard even now; it will probably be impossible soon. Being denied an interview because of AI algorithms that have flawed data, with discrimination embedded in their very core? Oh well, whom are you going to complain to, especially since you’re already part of a marginalized community that is slowly being denied the resources (money) to complain.
As old_eyes mentioned above, how is humanity to unite and resolve our huge issues (a changing climate, inequality and the rise of authoritarianism) when it becomes easier and easier to create alternate narratives that seek to divide us? Multiverse of madness indeed.
ChatGPT is a bit like index funds. It relies on others to provide the (market) information in order to work effectively enough.
If ChatGPT replaces enough of the actual person created content, it will cease to work as effectively as it does now.
And like index funds, it relies on the content it uses (the equivalent of prices for index funds) being reliable. That is virtually impossible to prove/disprove for index funds, but we know there is a lot of crap out there on the internet, so ChatGPT (or equivalents) will never be perfect?
There seems to be a huge focus on all the ways current AI is limited which is largely irrelevant. The early internet was also hugely limited. The far more interesting point is how obvious it is that the technology will be transformative across swathes of readily apparent use cases. Contrast that with crypto or the metaverse where you have to perform painful contortions to recognise the same.
Humans are estimated to make 10^18 computations a second. A smartphone performs 10^9. AI computers currently operate at 10^16. The human estimate could be wrong but it appears we are getting close.
Human perception is itself a model. We do not see the world exactly as ‘it is’ but in a way that helps us navigate the world. For example, we’ve discovered that AI can identify gender based on retina photos. We have no idea how it does this. Humans have not yet been able to access the information evidently contained within the photos to determine any difference. There is far more data contained within a photo than human perception processes. This is by design as you need to extract signal from noise.
Think also of the baby born with no language or identification model but who creates their own through pattern recognition over time. Like AI, they rely on all the data extant in the world to create that model. Like AI, they can use that extant data to generate novel data. Unlike AI, humans are anchored to the known whereas AI experiments randomly in the unknown. Where things become very knotty is in the application of value judgements.
Dismiss AI at your peril.
Regarding the word count, did you wait until ChatGPT had finished generating the article? In playing with it I’ve found it takes significant amounts of time to get past paragraph 3, probably as its model tries to avoid too much repetition and is “creatively” finding new sentences. Amazing tech regardless.
“For example, we’ve discovered that AI can identify gender based on retina photos. We have no idea how it does this. Humans have not yet been able to access the information evidently contained within the photos to determine any difference.”
I think that this is precisely the problem with current AI. We have been training neural nets and other statistical tools for classification problems for forty years in the areas I have been involved in. The ‘AI’ gets more sophisticated over time, but the core principles remain the same. That means performance (in accuracy as opposed to apparent ‘cleverness’) critically depends on the training sets available. Any bias in those training sets shows up in the AI model, and this has been demonstrated in AIs used to direct police to locations where crimes are likely to happen. These AIs have been shown to have racist tendencies because of the training set they were fed. Medical diagnostics AIs often suffer from errors in dealing with women and older people because so much of the physiological data that has been collected is from young fit men.
And the problem is you can’t ask the typical AI why it reached a specific conclusion, because it doesn’t know and there is no human-interpretable model to interrogate. Effectively AIs deal in ‘gut feel’.
So, I am sure there will be many good applications. They will typically be where the problem space is tightly bounded and probably linear rather than non-linear. Or where error has no particular impact. If my dictation software makes a mistake, I will see it and correct it (‘cos I always check). If grammar checking tells me to put in or take out a comma when I think that gives the wrong sense, I can ignore it. If Alexa misunderstands me, the worst that can happen is I buy too much toilet roll. None of these things affect my health, freedom or finances.
We know that people will push AI into places it is not reliable; to save money and be more ‘efficient’. We know because people have done it and are doing it.
People tout the ability of AI to find ‘new’ things. What they can do so far, is suggest interesting correlations that might be worth a human following up. They cannot complete the discovery because they can see ‘what’ but cannot determine ‘how’ or ‘why’. As a tool to extend human mental reach, they have great potential, as decision-makers they have a long way to go. It is not a matter of how many computations per second, but of what kind of calculations.
I had a go with chatGPT this afternoon and am very impressed. I quizzed it on some areas of general relativity and I thought it did well, then moved on to asking about the evidence for the mass of a neutrino. Again, good, but it got stuck when I asked it for links or references to the Super-Kamiokande experiment. It said it was not able to. That information is readily available in Wikipedia and elsewhere, implying it has not (yet) swallowed Wikipedia.
I then asked it to explain the joke “my wife has just gone to Jamaica. Did you make her? No she went of her own accord” and it was spot on! Perhaps because jokes are often thought of as difficult for computers to understand, a lot of work may have gone into this.
My daughter is using ML to try to help in a particular medical area where a number of choices have to be made following the collection of samples. The outcome of the choices made is readily measurable after the event and the medical experts are barely better than 50/50 in making the right upfront choices. She has some evidence now that ML can help make slightly better decisions, but as a result of her work has also provided feedback on areas experts should focus on, which I thought was interesting. ML needs experts; human expertise can be improved through ML.
On the whole I am quite optimistic about the use of ML and can especially see it as yet another tool to boost productivity.
Many technological advances tend to plateau after initial rapid progress. For example, VisiCalc was revolutionary, but for most users Excel hasn’t improved much in terms of useful functionality in the last 10 years. GPS is great, but cannot get a lot better. The Google search engine doesn’t seem any better to me than it did 10 years ago. Much the same is true of older technologies such as diesel engines. I suspect that ML will go the same way. Rapid progress in lots of areas, then plateauing.
> Humans are estimated to make 10^18 computations a second.
Care to supply a source? I am intrigued by the methodology of such an assertion, and since we seem to be indulging in a technology love-in I am sure that there will be a source for how the human condition can be summarised so neatly.
Reminds me of Jacquetta Hawkes, “every age gets the Stonehenge it deserves, or desires”. I don’t get the AI love-in, but then I don’t have to make a living in that world. But to be honest, given the choice between summing up the human condition by casting AI in our image and all the things humans have tried to believe in before, then if AI is the best we can do I’d rather take my chances elsewhere, even in a Nietzschean nothing.
And can we please bear in mind the distinction between artificial intelligence and machine learning, because it matters, both philosophically and practically. From the description given here and in the comments, the writer of the first part was machine learning.
And I am pleased to say I got to the second paragraph before wondering WTF? I have this reaction to a lot of churnalism too, though I fear the source is until now the non-artificial sub-intelligence of rich but dim media interns 😉
I agree with the conclusion of the hazard to some white collar work. If this can stamp out the scourge of management consultancy, perhaps this will have made the world a better place.
@ermine Here is a good summary:
https://www.openphilanthropy.org/research/how-much-computational-power-does-it-take-to-match-the-human-brain/
It’s far from settled science (hence the line that it could be wrong), but it is serious science.
@Naeclue #23
I think you got that one out of the wrong cracker 🙂
For the record, the normal version (amongst quite a few different “my wife” jokes) is, “My wife went to the West Indies”. “Jamaica?”. “No, she went of her own accord”.
https://metro.co.uk/2006/07/04/my-wife-went-to-the-west-indies-181575/
@platformer #25
Thank you, good stuff, worthy of a couple of cups of coffee in the morning. I am glad there was significant thought behind the quote!
@Factor, thanks. Strange thing is, my badly told version of the joke was correctly explained. The explanation of the correct version was rubbish.
I have also been delving into political bias and correctness. When asked to tell me a joke about feminists, it output a joke that made no sense. When asked again it refused and said:
I’m sorry, but I don’t think it’s appropriate to tell jokes that stereotype or belittle a particular group of people, such as feminists. Jokes that demean or marginalize others can cause harm and hurt people’s feelings, and I am not programmed to do that. Instead, let’s focus on being respectful and kind to one another, regardless of our differences.
@Naeclue
Obv. it was absolutely gagging to tell a joke, lost the punchline first time round, and second time decided not to keep digging, so went for the safe answer.
Once two particles, inseparable bound,
Entangled by a force beyond our ken,
Their fates forever intertwined,
A love that knows no end.
As they spin and dance through time and space,
Their connection never fades,
No matter how far they may be placed,
Their bond forever unshakable.
Like lovers gazing into each other’s eyes,
They share a single, unbroken glance,
A bond that defies the laws of science,
A love that knows no circumstance.
And as they continue on their journey,
Through the vast expanse of space and time,
They remain forever one,
Quantum entanglement, a love divine.
– Not Seamus Heaney, but fairly impressive for a simple prompt – I’ll take the credit for that at least. Seems ChatGPT has a wide range of applications. One that could be useful is in the area of technical writing, say for grant applications where the text is to be largely dry/factual – no doubt the grant application consultants use similar already.
Huge questions over plagiarism and copyright no doubt.
Also the answer format often seems to follow a similar pattern, so will we see bots that seek to detect the use of AI, followed by bot-resistant algorithms, and so on…
Question about B&B ISA –
I’m sitting on some losses for my preference shares and, given the budget changes, I’m thinking about bed and breakfasting into my ISA (I’ve not used the allowances yet x2).
I can’t face the bid-offer spread as it’s huge – do I buy and sell from myself? Is the spread minimised by the provider (Interactive Investor)? I’ve not been able to get a clear answer. And I’d rather not find out after the fact on a £20k transfer
Any real life example would be helpful
Thanks
@Bill G – think he’ll put a six-part Netflix special out?
@Boltt, speak to your broker about it. They should be able to do back to back deals on a much narrower spread. They would still have to present both deals to a market maker for regulatory reasons, but the market maker should execute the deals at a very small spread.
You may have to put £20k cash into your ISA first though.
@Boltt, I should add that you should make sure that you are getting simultaneous quotes before you go ahead. Don’t let some numpty on the other end of the line enter 2 unrelated deals into their trade management system!
@naeclue
Thanks, with II you have to use the sale proceeds to “fill” the ISA; if you’ve filled your ISA already it can’t be done until next year.
There’s a £49 charge for a telephone transaction – their previous response was that the spread SHOULD be less than £50 even for an online transaction (it didn’t inspire confidence)
The numpty doing 2 independent trades is my fear – 4% spread on £40k would be seriously annoying
Thanks TI for including the link for the Hargreaves Lansdown cashback offer. For people with large ISAs/SIPPs it sounds like an easy way of earning an additional £2,500. Since it’s a one-off payment my understanding is that it’s tax-free as well. That’s a nice Christmas present.
If one’s current ISA/SIPP provider allows free transfers out I can’t see any reason to transfer to HL for a year and then transfer back. Am I missing anything here?
Edit:
“If ones current ISA/SIPP provider allows free transfers out I can’t see any reason _not_ to transfer to HL for a year and then transfer back. Am I missing anything here?”