ChatGPT was announced in November 2022 - 8 months ago. Time flies.
Question for HN: Where are we in the hype cycle on this?
We can run shitty clones slowly on Raspberry Pis and your phone. The educational implementations demonstrate the basics in under a thousand lines of brisk C. Great. At some point you have to wonder... well, so what?
Not one killer app has emerged. I for one am eager to be all hip and open minded and pretend like I use LLMs all the time for everything and they are "the future" but novelty aside it seems like so far we have a demented clippy and some sophomoric arguments about alignment and wrong think.
It did generate a whole lot of breathless click-bait-y articles and gave people something to blab about. Ironically it also accelerated the value of that sort of gab and clicks towards zero.
As I am not a VC, politician, or opportunist, hand waving and telling me this is Frankenstein's monster about to come alive and therefore I need billions of dollars or "regulations" just makes folks sound like the crypto scammers.
Please HN, say something actually insightful, I beg you.
I work in tech diligence so I look at companies in detail. I have seen a couple where good machine learning is going to make a massive difference (whether it will keep them ahead of everyone is a separate question). I think it really boils down to:
"Is this a problem where an answer that is mostly right and sometimes wrong is still a great value proposition?"
This is what people don't get. If sometimes the answer is (catastrophically) wrong, and the cost of this is high, there's no market fit. So I think a lot of these early LLM related startups are going to be trainwrecks because they haven't figured this out. If the cost of an error is very high in your business, and human checking is what you are trying to avoid, these are not nearly as helpful.
I looked at one company in this scenario and they were dying. Couldn't get big customers to commit because the product was just not worth it if it couldn't be reliably right on something that a human was never going to get wrong (can't say what it was, NDAs and all that.) I also looked at one where they were doing very well because an answer that was usually close would save workers tons of time, and the nature of the biz was that eliminating the human verification step would make no sense anyway. Let's just say it was in a very onerous search problem, and it was trivial for the searcher to say "wrong wrong wrong, RIGHT, phew that saved me hours!". And that saving was going to add up to very significant cash.
So killer apps are going to be out there. But I agree that there is massive overhype and it's not all of them! (or even many!)
That's interesting. Quite the needle to thread. I wonder how big the market will be for niche models that aren't commodities.
It needs to be something lucrative enough that training the model is not-trivial but not so lucrative Microsoft/Google would care enough to go after. And it somehow needs to stay in that sweet spot even as Nvidia chips away at that moat with each new hardware generation.
I'll say that I pretty firmly disagree with this. I've been using Github Copilot for about six months for my own work and it has fundamentally changed how I write code. Ignoring the ethics of Copilot, if I just need to read a file with some data, parse it, and render that data on screen, Copilot just _does_ most of that for me. I write a chunky comment explaining what I want, it writes a blob of code that I tab through, and I'm left with a nicely-documented, functioning piece of software. A one-off script that took me 30 minutes to write previously now takes me maybe a minute on a bad day.
For ages we've had Text Expander and key mappings and shortcuts and macros that render templates of pre-built code. Now I can just say what I'm trying to do, the language model considers the other code on the page, and it gets done.
If this isn't a "killer app" then I'm not sure what is. In my entire career I can think of maybe two things that I've come upon that have affected my workflow this much: source control and continuous integration. Which, frankly, is wild.
Separately, I use LLMs to generate marketing copy for my side hustle. I suck at marketing, but I can tell the damn thing what I want to market and it gives me a list of tweets back that sound like the extroverted CMO that I don't have. I can outsource creative tasks like brainstorming lists of names for products, or coming up with text categories for user feedback from a spreadsheet. I don't know if I'd call either of those things "killer apps" but I have a tool which can do thinking for me at a nominal cost, quickly, and with a high-enough quality bar that it's usually not a waste of my time.
My friend made a great comparison that seems to agree with your take: ChatGPT for coding is like when Ruby on Rails came out. Or WordPress. It felt magical and boosted (a certain kind of) productivity through the roof.
We don't think of rails as the second coming though.
Same with code editors. Of course a Rails for all of code is cool. But I dunno, it's a code editor. I still use Sublime.
I'd maybe make the analogy that it's like the first ORM. Sure, you could write your own DB queries, but it just does what you want, and it's usually right.
Were ORMs the second coming? Meh. But it's arguable that they're still immensely powerful and useful and the way people write apps that interface with an RDBMS is permanently changed forevermore.
How did WordPress boost productivity? Fussing with hosting, a CMS, and plug-ins is a mess. I just went back to good old hand-written HTML with pico.css. Got my site down from 8 MB to 100 KB.
Most people cannot write good old hand-written HTML; when WordPress came out and picked up steam, it was the biggest thing to hit the web hosting industry since FrontPage.
I think the Microsoft GPT integration in Office is probably that app.
The ability to have your emails summarised, or to get your Excel formulas configured with natural language, etc. is incredibly useful for lowering the barrier to entry to tools that already speed humans up so much.
I don't think the use of these tools is some life-redefining feature, but a friend of mine joked that a year from now you will write a simple sentence like "write a polite work email with the following request: come to the meeting, you are late", then GPT will write the email, another GPT will send it, his GPT will summarise it, and he will instantly reply with another GPT-written apology based only on the summary. Leaving a trail of long, polite messages that no one will ever open.
Got a good chuckle from me. I find that in quick daily back-and-forths, time saved by such a system would be negligible. In many places I've worked, the 'polite work mail' has gone out the door long ago, already at the lower bound of what is considered a proper sentence.
It’s true that sometimes people repeat mistakes of the past by iterating on a fundamentally bad idea.
But sometimes the idea wasn’t bad. The mistake of the past could have been in execution of the idea or tech limitations.
When any new VR product is released, I could post a link to the article for the Nintendo Virtual Boy and make a snarky remark about how successful that was. That doesn’t really add anything though.
There was a science fiction story about this, with phone auto-message and auto-answer systems connecting with each other long after all the humans were dead.
Can we stop acting like the Gartner "hype cycle" is anything more than a marketing gimmick created by Gartner to validate their own consulting/research services?
While you can absolutely find cases that map to the "hype cycle", there is nothing whatsoever to validate this model as remotely accurate or valid for describing technology trends.
Where is crypto in the "hype cycle"? It went through at least 3 rounds of "peak of inflated expectations" and I'm not confident it will ever reach a meaningful "plateau of productivity".
Did mobile ever have "inflated expectations"? Yes, there was a lot of hype in the early days, but those people hyped about it, rushing to build mobile versions of their websites... were correct.
The "hype cycle" is a neat idea but doesn't really map to reality in a way that makes it useful. It's only useful for Gartner to create an illusion of credibility and sell their services.
> The "hype cycle" is a neat idea but doesn't really map to reality in a way that makes it useful.
What do you propose as a more accurate alternative, or do you think that the whole idea should be scrapped? Because personally I feel like certain tech/practices certainly go through multiple stages, where initially people expect too much from them and eventually figure out what they're good for and what they're not.
It's not always a single linear process: NFTs/crypto refuse to die despite the numerous scams out there and projects that seem to go nowhere, yet people keep falling for the scams because of the promised profits. However, the number of people critiquing the blockchain as a crappy database suggests at least some lessons learnt along the way, and hopefully some actually decent use cases.
Gartner must be so great at their job that you think they own the concept of hype cycles, and you rage against them being mentioned while being the one who introduced them to the conversation in the first place :)
That 8 months seems like a long time to you is indicative of just how fast tech has been moving lately. I expect at least another year before we have a good sense for where we actually are, probably more.
However, I'll hazard a guess: I think we haven't seen many real new apps since then because too many people are focused on packaging ChatGPT for X. A chatbot is a perfectly decent use case for some things, but I think the real progress will come when people stop trying to copy what OpenAI already did and start integrating LLMs in a more hands-off way that's more natural to their domains.
A great example that's changed my life is News Minimalist [0]. They feed all the news from a ton of sources into one of the GPT models and have it rate the story for significance and credibility. Only the highest rated stories make it into the newsletter. It's still rough around the edges, but being able to delegate most of my news consumption has already made a huge difference in my quality of life!
I expect successful and useful applications to fall in a similar vein to News Minimalist. They're not going to turn the world upside down like the hype artists claim, but there is real value to be made if people can start with a real problem instead of just adding a chatbot to everything.
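To make the News Minimalist pattern concrete: it is mostly a rating prompt plus a filter. Here's a minimal sketch of that shape; the 0-10 scale, the prompt wording, and the threshold are my own guesses about how such a system might work, not a description of their actual implementation, and the call to the model itself is left out:

```python
def rating_prompt(headline: str, body: str) -> str:
    """Ask the model to score one story. Scale and wording are assumptions."""
    return (
        "Rate the following news story from 0 to 10 for global significance. "
        "Reply with the number only.\n\n"
        f"Headline: {headline}\n{body}"
    )

def passes_bar(reply: str, threshold: float = 7.0) -> bool:
    """Only the highest-rated stories make the newsletter."""
    try:
        return float(reply.strip()) >= threshold
    except ValueError:
        # Unparseable reply: drop the story rather than guess.
        return False
```

The interesting design choice is the last branch: when the model doesn't follow the "number only" instruction, failing closed (dropping the story) is what keeps the newsletter quiet.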
> Not one killer app has emerged. I for one am eager to be all hip and open minded and pretend like I use LLMs all the time for everything and they are "the future" but novelty aside it seems like so far we have a demented clippy and some sophomoric arguments about alignment and wrong think.
In my mind I divide LLM usage into two categories, creation and ingestion.
Creation is largely a parlor trick that blew the minds of some people because it was their first exposure to generative AI. Now that some time has passed, most people can pattern-match GPT-generated content, especially content without sufficient "prompt engineering" to make it sound less like the default writing style. Nobody is impressed by "write a rap like a pirate" output anymore.
Ingestion is a lot less sexy and hasn't gotten nearly as much attention as creation. This is stuff like "summarize this document." And it's powerful. But people didn't get as hyped up on it because it's something that they felt like a computer was supposed to be able to do: transforming existing data from one format to another isn't revolutionary, after all.
But the world has a lot of unstructured, machine-inaccessible text. Legal documents saved in PDF format, consultant reports in Word, investor pitches in PowerPoint. And when I say "unstructured" I mean "there is data here that it is not easy for a machine to parse."
Being able to toss this stuff into ChatGPT (or another LLM) and prompt with things like "given the following legal document, give me the case number, the names of the lawyers, and the names of the defendants; the output must be JSON with the following schema..." and then save that information into a database is absolutely killer. Right now companies are recruiting armies of interns and contractors to do this sort of work, and it's time-consuming and awful.
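As a rough sketch of that pattern: the field names and schema below are invented for illustration, and the actual chat-completion call is omitted. What matters is pinning the output to a schema in the prompt, and validating the reply before it touches the database:

```python
import json

# Hypothetical schema for illustration; real extractions would define
# whatever fields the business actually needs.
EXTRACTION_SCHEMA = {
    "case_number": "string",
    "lawyers": ["string"],
    "defendants": ["string"],
}

def build_extraction_prompt(document_text: str) -> str:
    """Ask the model for JSON only, so the reply can go straight to a database."""
    return (
        "Given the following legal document, extract the case number, "
        "the names of the lawyers, and the names of the defendants.\n"
        "Respond with JSON matching this schema, and nothing else:\n"
        f"{json.dumps(EXTRACTION_SCHEMA)}\n\n"
        f"Document:\n{document_text}"
    )

def parse_reply(reply: str) -> dict:
    """Validate the model's reply before trusting it downstream."""
    data = json.loads(reply)  # raises if the model didn't return JSON
    missing = set(EXTRACTION_SCHEMA) - set(data)
    if missing:
        raise ValueError(f"model omitted fields: {missing}")
    return data
```

The validation step is the part that makes this viable at scale: a reply that isn't valid JSON, or that drops a field, gets rejected and retried instead of silently corrupting the database.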
Isn’t the summarization of text like legal documents where the notion of hallucinations come in as a huge blocker?
Is the industry making progress on fixing such hallucinations? Or for that matter the privacy implications of sharing such documents with entities like OpenAI that don’t respect IP?
Until hallucinations and IP/PII are fixed I don’t want this technology anywhere near my legal or personal documents.
Tasks like summarization and translation produce extremely few hallucinations. The more a model "doesn't know" and "has to guess", the more it hallucinates. This isn't much of a problem with what I like to call "morphing" tasks.
>Until hallucinations and IP/PII are fixed I don’t want this technology anywhere near my legal or personal documents.
Is it fair to assume that the world's largest law firms, in the deals they claim to have closed using OpenAI-backed tooling, double-check all outputs at their own expense? Could this be a marketing stunt rather than real-world usage that actually saved the firm money or time?
I've been using the ChatGPT API to do summarization of text from free-form documents. Not in the legal domain though, so no real regulatory risks. It works very well. I didn't see any hallucinations when spot checking, though of course I can't rule it out. But even if it only gets things 98% correct, that accuracy is good enough for my use case, and being able to programmatically feed these documents in instead of hiring multiple contractors to read through and parse out the data is a massive, massive time and money saver.
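One cheap way to keep that spot checking honest is to pull a fixed random fraction of outputs for human review rather than eyeballing ad hoc. A minimal sketch; the 2% rate and the record structure are my own assumptions:

```python
import random

def sample_for_review(results: list[dict], rate: float = 0.02, seed: int = 0) -> list[dict]:
    """Pull a small random sample of (document, summary) records for human spot checks.

    A fixed seed makes the sample reproducible, so reviewers can be pointed
    at the same records across runs.
    """
    rng = random.Random(seed)
    n = max(1, round(len(results) * rate))  # always review at least one
    return rng.sample(results, n)
```

Tracking the error rate in the sample over time also gives you an early warning if a model or prompt change quietly degrades quality.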
> Or for that matter the privacy implications of sharing such documents with entities like OpenAI that don’t respect IP?
Their permissions/organization model is a mess, but ChatGPT does offer the ability to opt out of data collection, at least for corporate accounts.
ChatGPT has already put some copywriters and journalists out of work, or at least reduced their hours. The app is quite literally “killing” something, i.e. people’s jobs. For those people, it’s not just empty hype. It’s very real. Certainly it’s already more real than anything having to do with blockchain/crypto.
I'm dubious. The few news websites that started publishing LLM articles (CNET, etc) were already circling the drain. They'd probably have fired their journalists anyway because they're on the edge of bankruptcy.
I expect that, over the next few years, companies that need to lay off workers will spin their mismanagement by claiming they are replacing those jobs with "AI".
The killer app for large enterprises is Q&A against the corporate knowledgebase(s). Big companies have an insane amount of tribal knowledge locked away in documents sitting on SharePoint, on Box, on file servers, etc. Best case scenario, their employees can do keyword search against a subset of those documents. Chunk those docs, run them through an embedding process, store the embeddings in a vector store, let employees ask questions, do a similarity search against the vector store, pass the top results and the question to the LLM, get an actual answer back to present to the employee. This unlocks a ton of knowledge and can be a massive productivity booster.
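The retrieval half of that pipeline is simple enough to sketch in a few lines. The toy version below uses bag-of-words vectors as a stand-in for a real embedding model; a production system would call an embedding API, use a proper vector store, and then pass the winning chunks plus the question to the LLM:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy stand-in for a real embedding model: bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def chunk(doc: str, size: int = 50) -> list[str]:
    """Split a document into fixed-size word chunks for indexing."""
    words = doc.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def top_chunks(question: str, docs: list[str], k: int = 3) -> list[str]:
    """Similarity search: the top-k chunks get passed to the LLM with the question."""
    chunks = [c for d in docs for c in chunk(d)]
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]
```

Even this crude version shows why the approach works: the employee's question never has to share exact keywords with the whole document, only with some chunk of it, and real embeddings relax even that requirement to semantic similarity.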
Yes! Never in my career have I seen an organization do a good job of organizing institutional knowledge and making it easily available to employees. It'd be a huge benefit to many organizations to be able to ask questions of the collective text holdings.
There is definitely interesting and high-potential technology here. I do not think the current crop of "wrap ChatGPT in an API for XYZ business-case" startups will succeed - they will be total fails across the board. There is also an issue where anyone with an iota of experience or degree in something tangential to AI or ML can be the "genius" behind a new startup for funding - a telltale sign of bubble mentality to me.
If LLMs in their current form as human-replacement agents are just cheaper versions of Fiverr / Mechanical Turk, and we all know there are very limited, bottom-of-the-barrel use cases for those cheap-labor services, then why would LLMs be a radical improvement? It's nonsensical.
About as killer as that Twitter clone that was in the news for a minute after forcing people to use it and immediately losing 90% of its captive audience.
They have been losing users. Summer is here, school is out, the kids are back in reality for the moment and apparently when they aren't busy plagiarizing homework the interest is very limited.
It might not be a killer app for you, but it's a killer app for me as an engineer, and I'm definitely not alone.
To give a concrete example, I used it to write and test a VSCode extension that provides autocomplete and type-checking for environment variables in 46 programming languages[1]. It was the first VSCode extension I've written and I have zero experience in the majority of those languages. The whole project took a little over a week. Without ChatGPT, it would have taken months to add support for so many languages.
lol, now who’s demented? Everyone I know uses it. It even diagnosed a problem with my pool filter among dozens of other uses I find for it. I like it and use it more than Google and stack overflow now. Losing the school crowd for the summer isn’t the beginning of the end, it just means there’s a cohort that doesn’t need it as much for a few months while they’re out having fun instead of stuck inside writing papers and doing math problems.
It is great that everyone you know uses it but the traffic to ChatGPT is decreasing and has been for over two months now. If pointing this fact out makes me demented consider that perhaps you are emotionally invested in this new toy/brand.
I guess we can wait and see what kind of usage trends will emerge long term. My anecdotal evidence (which is not worth much, same as yours) is that many normies tried it a few times and it was a topic of conversation but is no longer mentioned much.
> the traffic to ChatGPT is decreasing and has been for over two months now
This seems entirely unsurprising, and isn’t by itself enough to support your general thesis.
Interacting with these LLMs was extremely novel for most people when the tech first dropped, and those earlier months were the peak of the viral growth/expansion into public awareness.
As the novelty dies down, it’s not surprising that there would be less traffic. Early on, I had all sorts of ridiculous conversations just to see what would happen. Now, I only use it when I have some task in mind.
That transition points to this being the opposite of a toy - after the fun dies down, the real work begins.
> My anecdotal evidence…is that many normies tried it a few times and it was a topic of conversation but is no longer mentioned much.
This has not been my experience at all. Most non-technical folks I know who are interested in ChatGPT see value in its ability to expand their technical capabilities/knowledge.
People who are motivated to learn will continue to use this to their advantage.
If some subset of that population has no such interest, this has no bearing on the usefulness of the tech, nor is it representative of the population.
And even if the “normie” population (this is pretty reductive…) abandons it entirely, this again says nothing about the value/utility of LLMs, and hints at a product/market fit issue.
We don’t say programming languages are useless because they’re not adopted by the general public.
> Early on, I had all sorts of ridiculous conversations just to see what would happen. [...] That transition points to this being the opposite of a toy - after the fun dies down, the real work begins.
The "intelligence" behind it is too unpredictable to be reliable for work, and using it for fun is about as amusing as emailing HR.
> The "intelligence" behind it is too unpredictable to be reliable for work
This highly depends on the kind of work you’re doing. It’s great as a starting point for exploratory learning, helpful for some coding tasks, and useful for summarizing text.
As I work on a writing project that benefits from all of these use cases, it’s a good tool.
Not so great if you’re trying to write legal briefs.
> using it for fun is about as amusing as emailing HR
All due respect, but you’re either doing it wrong, or you’ve encountered some hilarious HR departments.
Ask it to speak in cockney as an 18th century barker trying to convince you to buy a lame horse or to continue the conversation in brolish as though you were two surfer dudes sitting on the beach and then just ask it anything you want like “explain modern monetary theory”. If you enjoy fiction then get it to help world build a new setting and then act out a scene with you playing one character and it playing the rest.
To get it to stay in character use the custom instructions feature to set the requirements.
I personally use copilot every day and I love it. It reduces the amount of typing I have to do, gives me lots of good suggestions for solving simple problems and has made working with unfamiliar languages so much easier.
I'd say we're maybe half or two-thirds of the way down from the peak of inflated expectations toward the trough of disillusionment. Before long, I think maybe in the next three months or so, certainly around the time we hit the one year anniversary of chatgpt's release, we'll start seeing mainstream takes along the lines of "chatgpt and Bing's Sydney episode and such were good entertainment, but it's obvious in hindsight that it was a fad; nobody is posting funny screenshots of their conversations anymore, and all the pronouncements about a superhuman AGI apocalypse were obviously silly, it's clear chatgpt has failed and this whole thing was the same old hype-y SV pointlessness".
And at that point, we will have reached the trough of disillusionment. I think funding will be less readily available, and we'll start seeing some of the bevy of single-purpose LLM-based products start closing up shop.
But more quietly, others will be (already are) traversing up the slope of enlightenment. As others have mentioned, this is stuff like features in Microsoft's and Google's productivity products (including those for software engineering productivity like Github Copilot), and some subset of products and features elsewhere that turn out to be compelling in a sticky way.
I expect 2024 and 2025 to be the more interesting part of this hype cycle. I don't think we're on the verge of waking up in a world nobody recognizes in a small number of days or months, but I think in a few years we're going to have a bunch of useful tools that we didn't have a year ago, some of which are the obvious ones we've already seen, but improved, and others that are not obvious right now.
Not sure if this was insightful enough for you :) Apologies if not.
We're still on the exponential rise of the hype cycle. If capabilities appear to plateau (no GPT-5/6 that are even more amazing), then the hype will not merely plateau but plummet. For now, anything seems possible.
As for a killer app, I'm another person for whom ChatGPT is it. I use GPT-4 something like Google, Wikipedia and Stack Overflow in one, but being very aware of the limitations. It feels a bit like circa 2000 when being good at googling things felt like a superpower. It doesn't do everything for you but can make you drastically more effective.
There are three levels to what's going on with AI at the moment, each with its own momentum and hype cycle: (1) the current generation of chat bots and image generators, which some of us would be using for the rest of our lives even with only minor refinements; (2) the prospect that new tools built on top of this and subsequent generations could remake the internet and how we interact with our gadgets; and (3) the prospect that the systems will keep getting smarter and smarter.
I wonder if language translation will be one of the "killer apps".
Especially if it can be done real-time and according to the context/level of the audience/listener. Even within the same language, translation from a more technical/expert level to a simplified summary helps education/communication/knowledge transfer significantly.
I mentioned the Stack Overflow Developer Survey once already today, but at the risk of sounding like a broken record, it has some data on this as well: https://survey.stackoverflow.co/2023/#ai
To save someone a click, around 44% of the respondents (some 39k out of 89k people) are already using "AI" solutions as a part of their workflow, another 25% (close to 23k people) are planning to do so soon.
The sentiment also seems mostly favorable, most aim to increase productivity or help themselves with learning and just generally knock out some more code, though there is a disconnect between what people want to use AI for (basically everything) and what they currently use it for (mostly just code).
There's also a section on the AI search tools in particular, about 83% of the respondents have at least had a look at ChatGPT, which is about as close to a killer app as you can probably get, even if it's cloud based SaaS: https://survey.stackoverflow.co/2023/#section-most-popular-t...
> Where are we in the hype cycle on this?
I'm not sure about the specifics here, but the trend feels about as significant as Docker and other container technologies more or less taking the industry by storm and changing a bunch of stuff around (to the point where most of my server software is containers).
That said, we're probably still somewhere in the early stages of the hype cycle for AI (the drawbacks like hallucination will really become apparent to many in the following years).
Honestly, the technology itself seems promising for select use cases and it's still nice that we have models that can be self hosted and somehow the software has gotten decent enough that you can play around with reasonably small models on your machine even without a GPU: https://blog.kronis.dev/tutorials/self-hosting-an-ai-llm-cha...
I'm cautiously optimistic about the current forms of LLM/AI, but fear that humanity will misuse the tech (as a cost cutting measure sometimes, without proper human review).
The killer app is ChatGPT. I'm not sure what you're expecting here, but it's been enormously useful while trying out new languages. For example, even if it's not 100% right, it has been a great help while working with nix, as I'm often ignorant to entire methods of solving a problem, and it's pretty good at suggesting the right method.
It's also super useful for things like "convert this fish shell snippet to bash" or "rewrite this Python class as a single function". It tends to really nail these sorts of grounded questions, and it legitimately saves me time.
I think 8 months is a little short for the utility of a new tech to be fully realized and utilized. I'm pretty sure there were still horses on the roads long after 8 months after the Model T first went on sale.
I can't tell if this is satire or not. It sounds so much, to be polite, like an uninformed stock trader that I find it hard to believe this isn't some sort of meta-commentary on Hacker News conversations.
There are plenty of examples of where the technology can eventually lead in terms of entertainment, impact on society and news, knowledge work, and so on. It doesn't have to happen immediately. But to handwave the myriad articles about the subject away and just say "I don't believe any of it, what else you got?" is a bit annoying.
Why don't you ask ChatGPT or Bard? If there's a hype cycle, it is just starting.
The killer app is the LLM tech itself, and the victim seems to be the whole tech ecosystem. It disintermediates everyone who is gatekeeping information and connects end users with the information they want, without Google, without SEO, and without ads. Even if we are not right there today, the potential is there. This in itself is huge, since the whole ecosystem of SV is funded by ads.
I think it has shown the limitations of the Society of Mind hypothesis. Aggregating individuals equates to aggregating knowledge/experience, not intelligence. This is why hives and anthills do not really surpass their individuals' intelligence. Ditto for human societies. In other words: composing LLMs using tools like LangChain yields minor improvements over a single LLM instance.
It's not an "AI killer app" that's the real deal, I think. It's that these AI tools (especially LLMs) are truly powerful tools in everyday work now. Automating stuff is a breeze now, whereas it was much more involved before. Data classification, content/code creation, data transformation... typical jobs for software engineers boil down to this. It's only a prompt now that you fire against an API. Automating tasks that used to require human clerks is now a few hours/days of creative coding, and the tasks are gone.
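To make the "only a prompt now" point concrete, here's roughly what a data-classification task can shrink to. The categories and prompt wording below are made up for illustration, and the actual API call is left out; the interesting bits are the prompt and the normalization that snaps a free-text reply back to a known label:

```python
# Hypothetical feedback categories; a real system would use its own taxonomy.
LABELS = ["bug report", "feature request", "praise", "other"]

def classification_prompt(feedback: str) -> str:
    """One prompt per record replaces what used to be a hand-built classifier."""
    return (
        "Classify the following user feedback into exactly one of these "
        f"categories: {', '.join(LABELS)}.\n"
        "Reply with the category name only.\n\n"
        f"Feedback: {feedback}"
    )

def normalize_label(reply: str) -> str:
    """Models sometimes add punctuation or casing; snap the reply to a known label."""
    cleaned = reply.strip().strip(".").lower()
    return cleaned if cleaned in LABELS else "other"
```

The "other" fallback matters: an unconstrained reply ("This looks like a complaint!") should degrade to a safe bucket instead of polluting the category set.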
A surprising amount of work can tolerate a percentage of errors in a non-deterministic way, even before considering that humans make even more errors that way usually. :-)
To be extremely cynical, all of this hype seems to be the mid-life crisis of Gen Xers who grew up on The Jetsons, trying to bring the future they saw on TV as children to life, without regard to economic or technical feasibility.
The biggest impact on my life has been Code Interpreter. Much of my job as a CEO involves analyzing data to make strategic decisions - “which of several options is best based on the evidence?”
Code Interpreter lets me upload data in a multitude of formats and play with it without wasting hours futzing around in Google Sheets or pulling my hair out with Pandas confusion. I know basic statistics concepts and I studied engineering so I know about signals and systems. But putting that knowledge into practice using data analysis tools is time consuming. Code Interpreter automates the time consuming parts and lets me focus on the exploration, delivering insights I never even had access to before.
I don't think there's a "killer app" coming soon, but it'll be a thousand cuts. One awesome thing here, one slightly less awesome but still useful thing over there. Take Copilot. Cool stuff and one of the early products. Doesn't change the game in any fundamental way, but it does have its impact on the work of a substantial fraction of developers.
This is not unlike the computer revolution itself. When the PC came on the scene it was easy - for some types - to imagine The Future, and they proclaimed it loudly. They forgot that the rest of the world takes its time and regularly takes decades to get used to very minor changes in routine.
Since writing that, we’ve started using https://read.ai and other similar tools at my company, and we find them very helpful. I also have a friend working on a large content moderation team that will be using LLaMa 2 for screening comments. Lots of uses!
This concept of hype and decline has been happening for literally decades. Yet people don't realize it even when it's literally on the first google page for anything to do with AI.
The people spouting this AI nonsense seriously need to fuck off and read a book.
> This article needs to be updated. The reason given is: Add more information about post-2018 developments in artificial intelligence leading to the current AI boom.
I don't quite remember anything comparable to LLMs like ChatGPT/Claude existing over the last few decades.
We shipped a major feature in our core product atop the API. It's central to our onboarding experience for new users, and works quite well at the job of "teaching" them how to use the product more effectively. It isn't magic, but this has been an inflection point in capabilities.
An artist friend of mine with no programming knowledge used ChatGPT to produce a variety of cool visuals for a music gig, in Processing - spinning wireframes, bobbing cube grids, that sort of thing. They didn't even know they needed to use Processing at first - ChatGPT told them everything. They had an aesthetic in mind, and ChatGPT helped them deliver.
I don't want to make any real assertions, but my intuitive reaction to this comment is _this person has no clue what they are talking about_. I would rather turn off syntax highlighting than turn off Copilot, and I'd rather disable Google search than ChatGPT. And frankly, it's not even close; I use these tools "all the time for everything".
If you follow the Gartner model, there is usually a surge of high expectations right before a "trough of disillusionment" - but eventually the real applications do emerge. Humans are just impatient.
This has gotten way out of hand by now, and in large part it isn't about serving humanity as a whole anymore (!). This is a few actors with the money and hardware trying to build their AI dream castles on the shoulders of the rest, who don't even care what the implications of their actions are. Money is regulating this business, and it is taking more away from us all in the long term than it pays in the short. I'm kind of glad if we're developing backwards, because such changes are necessary for building a balanced future for all of us. Not just for a few eligible...