Hacker News | ozgung's comments

I think the biggest problem is that most tutorials use words to illustrate how the attention mechanism works. In reality, there are no word-associated tokens inside a Transformer. Tokens != word parts. An LLM does not perform language processing inside the Transformer blocks, and a Vision Transformer does not perform image processing. Words and pixels are only relevant at the input. I think this misunderstanding was a root cause of underestimating their capabilities.
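To make that concrete, here is a minimal NumPy sketch of scaled dot-product attention (my own toy example, not any particular library's code). The block only ever sees vectors; whether they came from words or image patches is invisible to it.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # vector-vector similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted mix of value vectors

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))   # 4 tokens, 8-dim embeddings: no words in sight
out = attention(x, x, x)          # self-attention
print(out.shape)                  # (4, 8)
```

Nothing in the function cares what the four rows represent; language and pixels only matter in the embedding layer that produced `x`.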

As a side note, I find this capability of AI to mine social profiles quite disturbing. Automated profiling of social media accounts can be and is used with malicious intent. The amount of personal detail that can be recovered this way is shocking. It is possible to associate this information with a real identity, and it can be used to target and intimidate individuals.

It's literally what social media is for. People seem disturbed that when they put their private thoughts out on the internet their private thoughts end up out on the internet. I never understood that.

Chronologically, our main sources of information have been:

1. People around us

2. TV and newspapers

3. Random people on the internet and their SEO-optimized web pages

Books and experts have been less popular. LLMs are an improvement.


> LLMs are an improvement.

Unless somebody is using them to generate authoritative-sounding, human-sounding text full of factoids and half-truths in support of a particular view.

Then it becomes about who can afford more LLMs and more IPs to look like individual users.


Interesting point, actually - LLMs are a return to curated information. In some ways. In others, they tell everyone what they want to hear.

What is a good way of connecting Obsidian vault to AI?

I agree, classic innovator's dilemma. It's a new business that has nothing to do with Meta's existing business or products. They can't be under the same roof and must have independent goals.

Great post, and I think this extends to machine learning names, though not as severely. Maybe it all started with Adam. When I say “I used Adam for optimization,” it means I used a random opaque thing for optimization. If I say “I used an ADAptive Moment estimation based optimizer,” it becomes more transparent. Using human names or random nouns has been a trend: Lora, Sora, Dora, Bert, Bart, Robert, Roberta, Dall-e, Dino, Sam… with varying capitalization for each letter. Even the Transformer. What does it transform, exactly? But it gets worse. Here is a list of architectures that may replace Transformers [0]: Linformer, Longformer, Reformer, Performer, Griffin, BigBird, Mamba, Jamba... What’s going on?

[0] https://huggingface.co/blog/ProCreations/transformers-are-ge...
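For what it's worth, the Adam name does unpack into something readable once you spell out the moments. A toy sketch of the update rule (the hyperparameter defaults are the commonly cited ones; the quadratic example is mine, purely for illustration):

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: adaptive step from running moment estimates."""
    m = b1 * m + (1 - b1) * grad           # first moment (mean of gradients)
    v = b2 * v + (1 - b2) * grad * grad    # second moment (uncentered variance)
    m_hat = m / (1 - b1 ** t)              # bias correction for zero init
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(x) = x^2, gradient 2x, starting from x = 1.0
theta, m, v = 1.0, 0.0, 0.0
for t in range(1, 2001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t)
print(theta)  # settles near 0
```

“ADAptive Moment estimation” is exactly those two running averages plus bias correction, which the opaque name hides.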


I think doing your research with a search engine/AI/books and paraphrasing your findings is always valuable. And you should cite your sources when you do so, e.g. “ChatGPT says that…”

> 1. If I wanted to run a web search, I would have done so

Not everyone has access to the latest Pro models. If AI has something to add to the discussion and a user runs that query for me, I think it has some value.

> 2. People behave as if they believe AI results are authoritative, which they are not

AI is not authoritative in 2025. We don’t know what will happen in 2026. We are at the initial transition stage for a new technology. Both the capabilities of AI and people’s opinions will change rapidly.

Any strict rule/ban would be very premature and shortsighted at this point.


For the Flash vs. iPhone case, it was indeed mostly politics. People were using Flash and other plugins on websites because there was no alternative, say, to add a video player or an animation. The iPhone was released in 2007 and the App Store in 2008. The iPhone and iPad did not support the then-popular Flash in their browsers, so the web experience was limited and broken. HTML5 was first announced in 2008 but would be under development for many years; it was not yet standardized, and browser support was limited. Web apps were not a thing without Flash. The only alternative for users was the App Store, the ultimate walled garden. There were native apps for everything, even the simplest things. The Flash ecosystem was the biggest competitor and threat to the App Store at that moment. Finally, in 2010, Steve Jobs addressed the Flash issue and openly declared they would never support it. iPhone users stopped complaining, and in 2011 Adobe stopped development of its mobile plugins.

Adobe was in a unique position to dominate the apps era, but they failed spectacularly. They could have implemented payment/monetization options for their ecosystem and built their own walled garden. Plugins were slow, but this was mostly due to the hardware at the time. That changed rapidly in the following years, but without control of the hardware, they had already lost the market.


That is almost entirely backwards.

> For Flash vs iPhone case, it was indeed mostly politics.

It was politics in the sense that Flash was one of the worst causes of instability in Safari on OS X, was terrible at managing performance, and was a big drain on battery life, all of which were deal breakers on the iPhone. This is fairly well documented.

> iPhone was released in 2007 and app store in 2008. iPhone and iPad did not support then popular Flash in their browsers.

There were very good reasons for that.

> Web apps were not a thing without Flash.

That is entirely, demonstrably false. There were plenty of web apps, and they were actually the recommended (and indeed the only) way of getting apps onto iPhones before Apple scrambled to release the App Store.

> Flash ecosystem was the biggest competitor and threat for the App Store at that moment.

How could it be a competitor if it was not supported?

> iPhone users stopped complaining

It was not iPhone users who were complaining. It was Android users explaining to us how prehistoric iPhones were for not supporting Flash. We were perfectly happy with our apps.

> and in 2011 Adobe stopped the development of mobile plugins.

Yeah. Without ever leaving beta status. Because it was unstable, had terrible performance, and drained batteries. Just what Jobs cited as reasons not to support it.

> Adobe was in a unique position to dominate the apps era, but they failed spectacularly.

That much is true.

> Plugins were slow but this was mostly due to hardware at the time.

Then, how could native apps have much better performance on the same hardware, on both Android and iOS?


> Then, how could native apps have much better performance on the same hardware, on both Android and iOS?

Web engines were honestly not great back then. WebKit was OK, but JavaScriptCore was very slow, and of course that's what iOS, Android, and BB10 were all running, on slow hardware. I have distinct (bad) memories that even “GPU-accelerated” CSS animations were barely 15 fps, while native apps reliably got 60 fps unless they really messed up. That's on top of the infamous 300 ms issue, where every tap took 300 ms to fire because the browser was waiting to see if you were trying to double-tap.

So I really think some of the blame is still shared with Apple, although it's hard to say whether that was malicious intent to prop up the App Store, or just that they were under so much pressure to build out the iOS platform that there wasn't enough time to optimise. I suspect it was both.


I will never forget the hubbub around the discovery that everything you typed on Android went to a root shell. "What should I do?" ... "Reboot." *phone reboots*


The best thing Jobs ever did for tech was forcing the whole industry to advance HTML to where it could replace Flash, and killing the market for proprietary browser content plugins. I don't want to imagine what the web would be like today if Flash had won and the whole web were a loader for one closed-source, junky plugin.


Tom and Jerry's friendship makes more sense now.


That seems like a valid problem, one that was also mentioned in the podcast. 50 copies of Ilya, Dave, or Einstein will have diminishing returns. I think the proposed solution is ongoing training and making them individuals: MS Dave will be a different individual than Dave.gov. But then why don't we just train humans in the first place?

