
Tech Thread

I don't know if there is a historic parallel for this:


View: https://bsky.app/profile/techmeme.com/post/3llpestdejl2l

But this would make OpenAI the 34th most valuable company on the planet... ffs. Worth more than Samsung, ASML, Toyota, etc.

A quick refresher:

- OpenAI currently has a roughly $5 billion a year burn rate, with no stated path to profitability
- Their operating revenue comes almost entirely from subscriptions, but they lose money on those subscriptions because LLMs are wildly expensive to run per query
- There are multiple competing models, including open-source models that are competitive in quality on many tasks

This is pure insanity.
 
It is crazy. Though ChatGPT's still the leader in this field, and obviously AI is going to be used by pretty much every company eventually. Also partnered with Microsoft, which opens up a lot of resources and infrastructure.
 
obviously AI is going to be used by pretty much every company eventually.

I know it's weird that I have a contrarian opinion on something, but I don't think this ends up sticking. I don't disagree that most companies will try to use AI, and some will even make drastic changes to their organizations because they'll be convinced AI can replace large chunks of their workforce. But I don't think it's going to stick, and the firms that jump in with both feet are going to suffer badly.

Over the last year the sales chatter in the AI industry has gone from "AI is going to take your job" to "people who know how to use AI are going to take your job", because the truth about LLMs is that they require human oversight, and not just idiot human oversight, but someone skilled enough in the specific task being delegated to know when the AI is wrong...because they're wrong a shocking amount of the time, and throwing more compute at the models hasn't improved their accuracy. They straight up hallucinate, a lot. So giving one any task that could cost a company money or reputation if done poorly is a terrible idea.

AI products will be developed and forced on people, but they're currently all varying degrees of shit, and the improvements we were promised are very slow in coming. This really isn't that different from autonomous driving (something we've been promised is right around the corner for a decade now). Can it lane-assist you out of a sideswipe on the highway? Yeah. Can you sleep on the way to work? No, and there appears to be nothing coming any time soon to change that.

Also partnered with Microsoft, which opens up a lot of resources and infrastructure.

Microsoft actually just decided to pull the plug on a bunch of data centre leases:

 

I think you were out in front on the "they are over-promising and seriously under-delivering so far" part of the discussion, but it's definitely starting to make an impact and this will only continue to grow IMO. Now that's not to say it's going to revolutionize everything in the way they like to pretend, but it's here to stay.

Just from my own experience, where I haven't sought it out and don't have any subscriptions, I've still already used it to re-write a cover letter, do some basic research, and make some designs. Also finding the Google AI overview pretty useful so far. Not every time, but it's getting better.

It's still definitely possible OpenAI is wildly overvalued right now. I just don't think we know yet. I am pretty sure it's going to be used everywhere and in ways we don't even conceive of yet.
 

My question there though is: What's the business case?

Yeah, it can summarize stuff, do an okay job writing basic documents, do basic research (again, as long as you don't mind double-checking every citation to make sure it didn't invent them...which is common), etc. But how much would you, or anyone, pay for that? Investment in this stuff is going to hit the ~$500 billion mark pretty soon, and $1 trillion has been thrown around as what will be required before any of this matures into whatever they think it's eventually going to be able to do.

OpenAI released a benchmark last year called "SimpleQA", meant to measure how factually accurate their models' statements are. Their o1 model, their top model at the time, was accurate in 42.7% of its statements. lol. It's all marketing mate, this shit is useless for anything important. The industry's answer has always been more compute and more advanced models, but none of it is fixing the core issue. At this point it's mostly big tech unwilling to admit a sunk cost, so they're going to jam it into as much of our lives as possible to try to force a return on investment. It's enshittification squared.
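For context, SimpleQA grades each model answer as correct, incorrect, or not attempted, and the headline accuracy is just the share of answers graded correct. A toy sketch of that scoring (the grade counts below are made up purely to illustrate the arithmetic, not real benchmark data):

```python
# Toy SimpleQA-style scoring: every answer gets one of three grades.
def simpleqa_accuracy(grades):
    """Fraction of all questions whose answer was graded 'correct'."""
    correct = sum(1 for g in grades if g == "correct")
    return correct / len(grades)

# Illustrative grade counts only -- chosen so the score lands at 42.7%.
grades = ["correct"] * 427 + ["incorrect"] * 450 + ["not_attempted"] * 123
print(simpleqa_accuracy(grades))  # 0.427
```

Note that "not attempted" answers still count against accuracy under this definition, which is why a cautious model that declines to answer doesn't score any better on the headline number.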
 
I generally agree with you that up until this point it's been underwhelming, combined with over-the-moon hype. It's only just begun though; maybe we can agree that it's too early to know anything definitive yet. It could turn out to be useful in a thousand different ways and yet still wildly overvalued.

We know that they thought they'd need insane amounts of power until DeepSeek entered the chat. So overvalued right now yes, though OpenAI-Microsoft is a pretty strong team with the ability to pivot in just about any direction the tech leads them in.

Still think we should be able to do something like a Mars probe way more easily with AI, and just off the top of my head I think it's going to revolutionize medicine and coding, for starters. I don't think it's a bad thing if it still needs adult supervision either. There are a shitload of possibilities here, and we've barely begun to scratch the surface.
 
AI (if you want to call it that) should be good for small businesses and contractors. AI can do emails, reports, and contracts pretty easily and quickly.

If you send out newsletters or bulk emails, you might not have to hire a new person to accomplish that task and get AI to do the mock ups and send the emails.

Hell, it might be able to do SEO and SEM for you pretty competently so that you don't need to hire a company for that.

I think that is where the utility of machine learning will change how people do business, just like the smartphone.
 
Unsurprisingly, AI's best utility and its biggest success and moneymaker will be porn. Everyone thinks all these big important things will come of it, but it'll just end up being porn, as usual.

That and medicine. There is a lot of potential in medicine. In fact, it's already better than most family doctors at diagnosing or narrowing things down.
 
Not out of the realm of possibility that just these 2 things are both trillion dollar entities over time.
 

In medicine, at best it becomes a database-scraping tool: a doctor plugs diagnostic patient data in for a broader look at potential conditions, takes the output, and goes from there. Think of it like a research assistant that gets high and hallucinates a lot. Subscription service at best, with a few market participants, maybe with models tuned to be more accurate within more defined data sets/specialties.

Porn, well, as always the sky is the limit for porn.
 

AI today (and in the near future)

Currently, AI systems are not reasoning engines, i.e. they cannot reason the same way as human physicians, who can draw upon 'common sense' or 'clinical intuition and experience'.[12] Instead, AI resembles a signal translator, translating patterns from datasets. AI systems today are beginning to be adopted by healthcare organisations to automate time-consuming, high-volume repetitive tasks. Moreover, there is considerable progress in demonstrating the use of AI in precision diagnostics (e.g. diabetic retinopathy and radiotherapy planning).

AI in the medium term (the next 5–10 years)

In the medium term, we propose that there will be significant progress in the development of powerful algorithms that are efficient (eg require less data to train), able to use unlabelled data, and can combine disparate structured and unstructured data including imaging, electronic health data, multi-omic, behavioural and pharmacological data. In addition, healthcare organisations and medical practices will evolve from being adopters of AI platforms, to becoming co-innovators with technology partners in the development of novel AI systems for precision therapeutics.

AI in the long term (>10 years)

In the long term, AI systems will become more intelligent, enabling AI healthcare systems achieve a state of precision medicine through AI-augmented healthcare and connected care. Healthcare will shift from the traditional one-size-fits-all form of medicine to a preventative, personalised, data-driven disease management model that achieves improved patient outcomes (improved patient and clinical experiences of care) in a more cost-effective delivery system.

Connected/augmented care

AI could significantly reduce inefficiency in healthcare, improve patient flow and experience, and enhance caregiver experience and patient safety through the care pathway; for example, AI could be applied to the remote monitoring of patients (eg intelligent telehealth through wearables/sensors) to identify and provide timely care of patients at risk of deterioration.

In the long term, we expect healthcare clinics, hospitals, social care services, patients and caregivers to all be connected to a single, interoperable digital infrastructure using passive sensors in combination with ambient intelligence.[31]
 
Yeah, I'm relatively familiar with what people think it's going to do. What I'm saying is that the last year or so of results has put the usefulness of the entire project into question. The early problems with LLMs were gross power inefficiency and data accuracy. The industry told the rest of us that the solution was to 1) throw way more compute at it, and thus tens to hundreds of billions have been poured into massive data centres, to the joy of Nvidia shareholders, and 2) use bigger training data sets, which has also been done, and not always legally.

The improvements from those solutions have been marginal. So while DeepSeek might have cracked the code on keeping each query from costing a fortune, the other problems might not have solutions; they might be inherent to the design of all LLMs.

Stop and think about it for a second. Would you let an LLM with a 42.7% accuracy rating within 1,000 miles of your health care choices?


[Attached screenshot: SimpleQA benchmark results]

So that's from November, but that's also OpenAI's most advanced model. They have o1 Pro now...which is the same o1 but with "moar compute!".

This isn't an industry in its infancy either. The underlying ideas go back to the 1950s, and AI models have been trained on publicly available internet data for as long as the internet has existed. Now we're talking about ridiculously well-capitalized companies: OpenAI is a decade old with $20+ billion in investor capital burnt through (plus billions more in costs subsidized by Microsoft), billions more have been thrown at DeepMind, Grok, Claude, etc., and the best of the best is wrong half the time (at a cost of approximately $750 per query).
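Taking the figures in that last sentence at face value (a 42.7% accuracy rate and roughly $750 per query, both numbers from this thread rather than anything independently verified), the expected cost per correct answer is easy to back out:

```python
# Back-of-envelope: if a query costs ~$750 and is right 42.7% of the
# time, the expected spend per *correct* answer is cost / accuracy.
accuracy = 0.427        # SimpleQA score cited above for o1
cost_per_query = 750.0  # rough $/query figure cited above

cost_per_correct = cost_per_query / accuracy
print(round(cost_per_correct, 2))  # 1756.44
```

So under these (thread-supplied) assumptions, every reliable answer costs north of $1,700, which is the economic version of the accuracy complaint.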

It's a shitty product that is years, and hundreds of billions of dollars in additional investment (with no clue how to monetize it while it's shit), away from being useful for anything important, if it's ever useful for anything important. On top of that, most researchers I'm aware of don't think LLMs are a viable pathway to AGI at all, so this might just end up being a trillion-dollar dead end that can do some soft creative writing for you.
 
My wife is in family medicine, and differential diagnosis is the easiest part of her job. The hard part is managing patients with mental health issues, who won't take their meds, with abusive spouses, with no money for physio, with the nearest available specialist appointment three years away, etc, etc. Not to mention the unpredictable ways in which patients report their symptoms, or don't report them at all. It's this wraparound type of service that AI is just not (yet) built for. It has been excellent at transcribing and summarizing visits, however, which has been great.
 