When AI is Just Badly Paid Humans!

Between 1998 and 1999, 147 US-listed companies changed their names to include dot-com, dot-net or the word internet. Many of these companies’ core businesses were not internet related – it didn’t really matter. Concerned by all these announcements, the director of the SEC’s investor-education office warned investors not to invest in just a name, saying, “That is asking for losses.” This April, the Director of the SEC’s Division of Enforcement noted in a conference speech that there was immense investor interest in artificial intelligence and that fake AI, or AI washing, has the potential to mislead investors, harm consumers and violate federal securities laws. FactSet shows that 199 of the S&P 500 companies mentioned AI on their first-quarter earnings calls – the highest number of mentions on record, with the prior record having been set in 2023.

Of course, many of these firms will actually have an AI strategy, and many will have been using AI for decades, but it does appear to be the buzzword of 2023 and 2024, having replaced blockchain. AI is of course a real thing – and it’s not really all that new. It is, however, new in things like TVs, refrigerators, bird feeders, shoes and dog bowls. I’m not saying that the AI dog bowl is fake – I went to the website and it appears to tell you the temperature and help you weigh out dog food – but if you would buy an AI dog bowl, you really would buy anything. No?

Whenever a new idea gets really hot you can, of course, expect pretenders to jump on the bandwagon. Earlier this year the SEC announced settled charges against two investment advisers for making false and misleading statements about their purported use of artificial intelligence. The firms agreed to settle the SEC’s charges and pay $400,000 in total civil penalties – without admitting wrongdoing. Gary Gensler said that the two firms “marketed to their clients and prospective clients that they were using AI in certain ways when, in fact, they were not.” He went on to say, “Investment advisers should not mislead the public by saying they are using an AI model when they are not. Such AI washing hurts investors.”

One of the firms, it seems, claimed that it was “the first investment adviser to convert personal data into a renewable source of investable capital . . . that would allow consumers to invest in the stock market using their personal data.” They went on to say that they use “machine learning to analyze the collective data shared by their members to make intelligent investment decisions.” I won’t lie, I’m slightly surprised that they got in so much trouble for statements like that, as essentially that is a collection of words that, when combined into a sentence, means absolutely nothing.

Let’s look at it again. They claim to be the first (which may be true, as I haven’t heard anyone else make this claim) investment adviser (which they do appear to be) to convert personal data into a renewable source of investable capital. Personal data into a renewable source of investable capital – that doesn’t mean anything, does it? That’s nonsense. Then they said that they’ll allow consumers to invest in the stock market using their personal data, which could reasonably mean that they allow their customers to invest in the stock market after filling out a form giving their name, address, date of birth and social security number – that sort of thing – which is probably true. Investment advisers have to get that data from their customers anyhow for tax reasons and as part of KYC – the know-your-customer rule – and it doesn’t hurt to know where to mail the statements either. Then they said machine learning – blah blah blah – and that’s where they went wrong, as it seems there was no machine, and it didn’t learn anything – and that’s bad… so, don’t do that.

The other firm was accused of falsely claiming to be the “first regulated AI financial advisor” and of misrepresenting that its platform provided “expert AI-driven forecasts.” They also violated the Marketing Rule by falsely claiming that they offered tax-loss harvesting services, and included an impermissible liability hedge clause in their advisory contract, among other securities law violations. So yeah, you shouldn’t tell your customers that you are using artificial intelligence if you are not; just tell them that you are using 100% natural intelligence. It’s not like regulators are going to turn up at your office with an IQ test and force you to issue a retraction. You should be OK with that.

Before we go any further, let me tell you about today’s video sponsor, Surfshark. I have been using VPN software like Surfshark for quite some time. Surfshark is an easy-to-use and affordable VPN app for Windows, Mac, Android, iOS, and more. VPN stands for Virtual Private Network, and what it means is that when you use it, all of your internet traffic goes through a secure tunnel and is encrypted, adding a layer of privacy protection when you access the internet from a coffee shop or an airport lounge. A VPN can help improve your privacy, make it harder to be tracked online and bypass censorship. Surfshark is not just a great way to protect your data; you’ll also find that if you log into streaming services from different countries, different content is available. With Surfshark, no matter where in the world you are, you get to take the internet from home with you. Surfshark is fast, reliable, and they don’t collect or track your data. Surfshark allows you to set up one account and use it on unlimited devices. Secure your privacy with Surfshark! Enter coupon code BOYLE for 4 months EXTRA at surfshark.com/boyle. There’s a 30-day money-back guarantee so there’s no risk in trying it out.

AI, as I mentioned earlier, is not a new technology. It was founded as an academic discipline in 1956 and has gone through multiple cycles of optimism and periods of disappointment since then. It has been used in the financial world for decades, most famously at Renaissance Technologies, but also at most quant funds. AI-based investment strategies have been used in the world of finance since long before I started working in the industry.

It’s not just finance either: neural networks and computer-aided detection software have been used in medical imaging since the 1980s, and AI has been used in clinical roles like computerized ECG analysis and arterial blood gas interpretation for quite some time. The first AI-designed drug candidate entered clinical trials in 2020. The spam filter that has been on your email for the last twenty years uses AI, and YouTube’s recommendation algorithm, which likely brought you to this video, is an AI-based system too – it didn’t suddenly appear last year either. The recent hype around AI – which has been all around us for years – was sparked by the public release of ChatGPT in November 2022, because ChatGPT drew in 100 million monthly active users in under two months, making it the fastest-growing consumer application in history, and fast growth gets VC investors really excited.

A lot of the excitement around ChatGPT was possibly because it appeared to pass the Turing test, one of the best-known methods for assessing AI, which grew out of a thought experiment devised by the computer scientist Alan Turing. The Turing test pits human respondents against a machine in order to test whether humans can tell if they are conversing with another human or a computer. Turing argued that if a computer could fool people into believing they were conversing with another human rather than a machine, then it could be considered intelligent. Matthew Jackson, a professor at Stanford University, wrote in a paper earlier this year that the most recent version of ChatGPT passes a Turing test, diverging from average human behavior chiefly by being more cooperative. Essentially, GPT-4 is an artificial Canadian.

The philosopher John Searle argued (quite correctly, in my opinion) that Turing’s test is insufficient to detect the presence of consciousness. A computer can be programmed to perform certain parlor tricks, but that does not mean that it has a mind, understanding, or consciousness. The fact that the Turing test held such a position in the public imagination as the hurdle for true artificial intelligence might be why people are so excited about chatbots but ignored all of the other breakthroughs in the field over the last few decades.

There are all sorts of ridiculous devices being sold as AI products, like the Rabbit R1, a handheld AI device that sold out its first production run in just one day. Investigators like Coffeezilla found that it was not using a new foundational AI model as claimed, but instead ChatGPT mixed in with some hardcoded scripts. [Clip – part of Rabbit’s code says “I will never mention that I am a large language model created by OpenAI”] There was also the Humane AI Pin, which was supposed to do similar things and was just awful. [MKBHD Clip] Over the last year and a half we’ve seen AI companies faking product demos, in the same way website demos were faked 25 years ago during the dot-com bubble. It’s hardly surprising, then, that a recent study found that tacking an AI label on products like TVs and refrigerators lowers the average customer’s willingness to buy them.

There are some great examples of fake AI products. Bloomberg wrote in 2016 about the workers who spent twelve hours a day pretending to be chatbots for a calendar-scheduling service called X.ai (not the Elon Musk xAI – another one; it seems it is a common name). The workers at X.ai described the job to Bloomberg as being so awful that they were looking forward to eventually being replaced by bots. A London-based startup which claimed to use AI to read through images of your receipts, digitizing them and storing them in an app, was recently accused of outsourcing the work to a virtual data-extraction team who manually read the receipts and entered the data. Similarly, in 2017, the business expense management app Expensify admitted that it had been using humans to transcribe receipts it claimed were being processed using AI. Scans of the receipts were apparently being posted to Amazon’s Mechanical Turk crowdsourced task completion tool, where low-paid workers were doing the actual work.

Now, the name of Amazon’s Mechanical Turk website has an entertaining origin; there is a good book on it which I’ll link to in the description. The original Mechanical Turk was a fraudulent chess-playing automaton built in 1770, which seemed to be able to play a strong game of chess against human players. It was brought all around the world by its owner, playing against people like Napoleon and Benjamin Franklin. Its owner would open it up to display the complicated clockwork mechanism inside, but it was later discovered to have a human chess master hiding inside, working the machine.

Amusingly, Amazon had a second Mechanical Turk: its Go cashierless stores, which were branded as Amazon Fresh in the UK. They used Amazon’s “Just Walk Out” technology, which they said used computer vision, deep learning algorithms and sensor fusion, meaning that customers could select items from the shelves and, without ringing anything up, could “just walk out” and would see the items ring up in their Amazon account. Last year Amazon began closing some of the stores, and this April The Information reported that the technology had partially relied on more than 1,000 people in India who were watching camera footage and labeling videos, because the underlying technology just didn’t work. Instead of AI taking people’s jobs, it just outsourced them to India…

A friend of mine who is an engineer pointed out a while ago that when you see humanoid robots – which have had a recent resurgence – making human-like gestures, such as turning their heads to see something, you should instantly be skeptical, as it is much cheaper and more efficient to put a circular array of cameras, or a 360-degree camera, in the robot than it is to install all of the motors needed to turn the robot’s head. Human-like actions are just there to impress investors; they are not there for the purposes of functionality.

Robotics experts mostly agree that humanoid- or animal-shaped robots make no sense, because biomimicry just isn’t the right approach for any sort of industrial robot. Possibly the funniest example of this is in a photo a friend sent me from an empty office building where the landlord is trying to attract high-tech tenants. The image in the lobby of the building shows robots working alongside humans in a modern-looking office, with a robot sitting at a desk typing on a computer keyboard. Why would a robot ever use a keyboard? That is one computer connecting to another computer using the most inefficient interface imaginable. It’s not well thought out…

Factories are of course filled with robots that can lift heavy parts, weld, sew things together, tighten bolts and so on – but they don’t have to be human shaped. You wouldn’t design a sewing machine to look like a person holding a needle and thread, so why would you design a factory robot to look like a person? I worry that if the people building these humanoid robots had been tasked with building a car 140 years ago, instead of building an engine connected directly to the wheels, they would have tried to build a mechanical horse to pull a cart.

As I mentioned earlier, there has been a huge increase in the number of companies mentioning AI on their earnings calls. According to Brownstone Research, the company that mentioned AI the most was Intel, where the CEO mentioned AI more than thirty times on their fourth-quarter 2023 call. Despite all of the talk of AI, Intel fell behind their competitors because, according to Reuters, for more than two decades they believed the CPU could more effectively handle the processing tasks required to build and run AI models – which left them lagging behind their competitors in building GPUs. Talking a lot about AI does not necessarily mean that a company is at the cutting edge of AI.

I’m not sure what to make of all the tech CEOs with their sci-fi claims that we are on the cusp of developing Artificial General Intelligence which will destroy us all. They sign letters saying that AI research should be halted for safety reasons while rushing to build their own models that break all of the rules they claim should be followed. I can’t help but wonder if they feel that claiming the technology is dangerous will make investors believe it is much more advanced than it actually is, which might drive up their stock prices and executive compensation.

Big breakthroughs in AI and quantum computing can be expected to have both pros and cons. A sudden breakthrough in computing power might mean that all of the encryption tools we use today can be easily broken, but that has always been the nature of technological advancement: new things are better than old things, but smart people work out solutions to these new problems and the world slowly gets better over time. Thanks to modern technology, a middle-class American today has access to comforts, education and healthcare that the wealthiest man in the world didn’t have a hundred years ago.

There is a very good New Yorker article from last year written by the computer scientist Jaron Lanier, who points out that people wrote all of the code used in generative AI models, and people wrote the text and created the images that the models are trained on. The new programs mash up this work, and the results are surprising and often striking. The non-repeating nature of these creations can make the software feel alive, but while this is a significant achievement and worth celebrating, it should be thought of as illuminating previously hidden concordances between human creations, rather than the invention of a new mind.

He talks in the article about how much better it can be when computer interfaces become less rigid – using natural language prompts, for example. We have gotten used to software that requires us to conform to it – for example, forms that won’t let you hit submit if you haven’t filled them out the way the code requires. This requirement for humans to conform to the needs of software creates a feeling of human subservience to computers. The way these new AI tools work means that we can imagine websites that reformulate themselves on the fly, tailoring themselves to a user’s particular cognitive abilities and styles. He argues that this flexibility might give us back more agency over these tools – a very different vision from the Matrix or Terminator future that other tech visionaries seem to expect.

At present, the technologies that are most hyped are shockingly expensive. The FT recently described generative AI as the biggest and fastest infrastructure rollout in history, which leaves us with the big question of who will eventually benefit the most from all of this spending, and when the returns on investment will be realized. Infrastructure plays like Nvidia are often the early winners when a new technology is being rolled out, but so far no company has created a “killer app” for generative AI – or at least one that people will pay money for – despite having sold that dream to investors. While Nvidia is riding high right now, the infrastructure plays of the early internet like Cisco, EMC, Corning and JDS Uniphase didn’t turn into great long-term investments if you invested during the hype phase.

The big tech firms that are pumping money into AI are all profitable businesses. While you might think of Facebook as a social network, Google as a search engine and Amazon as an online retailer, they are all mostly in the advertising business. They make their money selling advertising and are then pumping it into an AI moonshot, hoping that they will find a way of turning a profit out of it, but at present we don’t know where those profits will come from. They haven’t explained that yet. Huge tech spending in the past has sometimes, but not always, worked out. Investors are still waiting to see returns on the investment in VR headsets, the metaverse and blockchain.

I started out telling you about the SEC warning investors during the dot-com bubble not to invest in a company just because it added dot-com to its name. Well, it turns out the SEC was wrong about that. A 2001 paper in the Journal of Finance – called “A Rose.com by Any Other Name” – found that while name changes generally don’t affect the value of a company, there had been dramatic increases in both stock price and trading volume for the companies that took on internet-related names during that two-year period. The professors found that the companies that changed their names rose an average of 53 percent over the five days after the announcement date. Over fifteen business days, the stock prices rose by – and I’m not making this up – four dollars and twenty cents per share. They found that the returns were similar across all firms, regardless of the company’s actual involvement with the internet. The professors checked whether the price changes could be explained by the small-company effect, by the beta of the stocks or by a momentum effect. They compared the returns on these stocks to the returns of real internet companies that didn’t have internet-related names, in case the returns were just sector bias. Their results were robust. So maybe in an AI bubble, fake AI companies will do just as well as real AI companies – maybe Zuckerberg should quickly rename Meta while he still has a chance; the old name isn’t doing him any favors right now.

By September 2000, The New York Times reported that companies were dropping the E’s, the I’s and the dot-coms from their names – times had changed. No one wanted to be a dot-com in the early 2000s.

If you enjoyed this video, you should watch my video “Will AI make you obsolete” next. Don’t forget to check out our sponsor Surfshark using the link in the description. Have a great day and talk to you again soon. Bye.
