Hacker News | simondotau's comments

I haven’t heard anyone make any of those claims in the way you’ve made them.

Conversely, the inference you leave readers with from your description of Apple’s contribution to USB-C and CPU design is just as untrue.


>>your description of Apple’s contribution to USB-C and CPU design is just as untrue.

Not my description. Or do you mean Gruber's description of Apple's contribution?

His post on USB-C, CPU design, and AirPods is still up on his site.

Those who actually worked on USB-C before it was even publicly known have also posted about it on HN.


I haven’t heard Gruber make any of those claims in the way you’ve made them.

Conversely, the inference you leave readers with from your description of Apple’s contribution to USB-C and CPU design is just as untrue.


Presumably if it knows it needs to perform multiple searches in order to gather information (e.g. searching for redundant implementations of an algorithm, plus calls to the codebase's canonical implementation), it should be able to run those searches as parallel grep calls.

I'm trying to figure that one out.

LLMs are inherently single-threaded in how they ingest and produce info. So, as far as I can gather from the description, either it spawns sub-agents or it has a tool dedicated to the job.
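Either way, the mechanics aren't mysterious: the independent searches just need to be issued concurrently and awaited as a batch. A rough sketch of the idea (a hypothetical helper, not the actual tool implementation):

    // Sketch only: fire off several independent grep searches at once and
    // collect the results in one batch, instead of waiting on each in turn.
    import { promisify } from "node:util";
    import { execFile } from "node:child_process";

    const execFileAsync = promisify(execFile);

    async function parallelGrep(patterns: string[], dir: string): Promise<string[]> {
      const searches = patterns.map((pattern) =>
        execFileAsync("grep", ["-rn", pattern, dir])
          .then(({ stdout }) => stdout)
          .catch(() => "") // grep exits non-zero when nothing matches; treat as empty
      );
      return Promise.all(searches); // all searches run concurrently
    }

    // Hypothetical usage: look for the canonical implementation and redundant copies together.
    parallelGrep(["function normalizeVector", "normalizeVector("], "./src").then(console.log);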


The web platform doesn’t need to move this fast. Google is, often unilaterally, pushing new features and declaring them standards. In my opinion, the web should not be changing so fast that a truly open source community project couldn’t keep up. I don’t like how the web has become reliant on the largesse of billion dollar corporations.

I recognise that this is a controversial take, but in my opinion what Google is doing is a variant of “embrace and extend”. Traditionally, this meant proprietary extensions (e.g. VBScript), but I think this is a subtle variant with similar consequences.


I know it's fashionable to forcefully shove the same pet peeves about Chromium into any topic even loosely related, but here I'm talking about Safari webcompat fixes, bug fixes, and improvements having very long delays between being written and landing in customers' hands. I would make the same argument if Chrome never existed. Thank you for presenting the 10,001st reissue of this "controversial take".

The behaviour of the entities WebKit is supposed to be compatible with isn't a "loosely related" topic; it's precisely on-point. It's certainly no less on-point than nebulous criticisms of Apple for assumed NIH syndrome or marketing priorities. You criticise Apple for not having a rapid release schedule; I am criticising the very notion of rapid release schedules (other than for security patches).

The web platform doesn't need to move so fast.


How can you defend Safari rendering broken sites for long periods due to lack of frequent updates as a good thing?

The ever-relevant adage of the reality distortion field applies here.

Just like how Safari not having WebGPU was touted as a feature, and now that it has support, WebGPU has suddenly turned into a feature. Apple can do no wrong to some. Whatever they do is a feature, and whatever they don't do is a feature too.


I agree that numerous companies inspire occasional weird reflexive defences from their most enthusiastic supporters. Thankfully, bad arguments have no transitive value.

Implying otherwise is itself a bad argument.

It is true that Safari sometimes lagged in ways that are legitimately open to criticism. There are instances where Safari had incomplete or broken feature implementations. But many claims of “broken sites” are really just evidence of lazy developers failing to test beyond Chrome or to implement graceful fallback. Relying on bleeding-edge Chromium features before they've been broadly adopted by browsers is, IMHO, an infatuation with novelty over durability. It's also, IMHO, a callous disregard for the open web platform in favour of The Chrome Platform. Web developers are free to do whatever they like, but it's misleading to blame browsers for the bad choices and/or laziness of some web developers.
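To be concrete about what graceful fallback looks like, here's a minimal sketch using feature detection rather than browser sniffing. (The two render functions are hypothetical stand-ins for a site's render paths, and it assumes WebGPU type declarations such as @webgpu/types are available.)

    // Feature-detect WebGPU and fall back gracefully if it isn't there.
    declare function renderWithWebGPU(device: GPUDevice): void; // hypothetical fast path
    declare function renderWithWebGL(): void;                   // hypothetical fallback path

    async function initRenderer(): Promise<void> {
      if ("gpu" in navigator) {
        const adapter = await navigator.gpu.requestAdapter();
        if (adapter) {
          renderWithWebGPU(await adapter.requestDevice());
          return;
        }
      }
      // Browsers without WebGPU (or with it disabled) still get a working site.
      renderWithWebGL();
    }

Nothing about this is exotic; it's the same progressive enhancement discipline the web has always rewarded.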


> But many claims of “broken sites” are really just evidence of lazy developers failing to test beyond Chrome or to implement graceful fallback.

Correct. People test Chrome first and often only. That'll never change, because people are lazy, and you have a humongously long tail of websites with varying levels of giving a shit and no central authority that can enforce any standards. Even if another browser takes over, they'll only test that one.

The solution is formal tests and the wpt.fyi project. It gives a path to perfectly compatible implementations of agreed-upon standards, and a future where *the only* differences between browsers will be deliberate (e.g. WebMIDI). Brilliant.

That's why I wish the gap between Safari TP's wpt.fyi score and Safari stable's score were smaller. Simple!
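For anyone unfamiliar, the tests wpt.fyi aggregates are small and mechanical, roughly in this testharness.js style (the declare lines are only there so the snippet stands alone as TypeScript):

    // A web-platform-tests style check: set a style property and assert the
    // engine parses and serializes it the same way every other engine does.
    declare function test(fn: () => void, name: string): void;
    declare function assert_equals(actual: unknown, expected: unknown, description?: string): void;

    test(() => {
      const el = document.createElement("div");
      el.style.display = "flex";
      assert_equals(el.style.display, "flex");
    }, "display: flex parses and round-trips");

Thousands of these, run against every engine, are what make the per-browser scores comparable.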


Why do you keep conflating bug fixes with new platform features?

Because such bugs were predominantly associated with then-new platform features.

As a web developer myself, I appreciate the frustration with Safari's flexbox bugs of a decade ago and viewport bugs more recently. I also remember being endlessly frustrated by Chrome bugs too, like maddening scroll anchoring behaviours, subpixel rounding inconsistencies, and position:fixed bugs which were broken for so long that the bugs became the de facto standard which other browsers had to implement. All browsers have bugs. To suggest that Safari was uniquely bad is to view history with Chrome-tinted glasses.


I'm not saying that other browsers don't have bugs. This thread is about how Safari doesn't fix them for a long time because it ships slowly.

You are assuming that Safari didn’t fix bugs for a long time because it “ships slowly.” Maybe some bugs are just complicated and take time to fix. It took Google years to fix the bugs I mentioned earlier (and many others) despite having the largest budget of any browser project and a VERY rapid release cadence.

No, a lot of them are pretty straightforward; this is why I’m upset. I’m talking about, like, “SVGs with this feature don’t render correctly due to an oversight in size calculation”, not “can you please implement WebGPU in the next release cycle”.

> How can you defend Safari rendering broken sites for long periods due to lack of frequent updates as a good thing?

That hasn't been true for a few years now.

Even now, when a site breaks in Safari, more often than not, it's because that particular site is using a Chrome-only feature that hasn't shipped in Safari or Firefox yet. These developers need to be reminded that progressive enhancement is a thing.

There are web developers who only test their sites on Chrome, which makes no sense, given mobile Safari has around 50% marketshare in the US [1] and about 21% globally [2].

> Just like how Safari not having WebGPU was touted as a feature, and now that it has support, WebGPU has suddenly turned into a feature.

I must have missed this one, but anyone paying attention would have noticed WebGPU had been available in Safari (behind a flag) long before it became official; it was always on track to become a real feature.

[1]: https://gs.statcounter.com/browser-market-share/mobile/unite...

[2]: https://gs.statcounter.com/browser-market-share/mobile/world...


"Google learned from Microsoft’s mistakes and follows a novel embrace, extend, and extinguish strategy by breaking the web and stomping on the bits. Who cares if it breaks as long as we go forward." https://www.quirksmode.org/blog/archives/2021/08/breaking_th...

That's a good article. Thanks for surfacing.

VBScript is a word I hadn't heard in quite a while! Brings back memories of editing 5k-line .asp files to find an if statement and then a thousand lines of HTML and such. Sadly, I don't think web development is actually better 20+ years later, just different...

The web platform on your device needs to be locked to a specific version because the OS stopped being updated. Once the OS stops being updated, you're supposed to buy a new device.

You shouldn't be allowed to use an old device with an updated browser, especially not a browser from a 3rd party, because that doesn't help Apple sell more iPads.


You're seriously arguing that Sony and Microsoft need or deserve special treatment? That's patently insane. The government shouldn't be in the business of assessing the minutiae of competing business models. Whatever you think is right for Apple is good enough for absolutely everyone.

I'm mostly saying

1. No one cares about making an app store for the PS5 when in 10 years they'll need to port it to the PS6.

2. Consoles are already stagnating, and it won't take much to push Sony out too. So basically we'd reduce competition back down to Nintendo to enable a feature no one cares about.

But sure, if you want to look at it through that lens: I want Apple and Google to get "special punishments" because they've long proven to be monopolies and practice anti-competitive behaviors. We can deal with other monopolies as we go along.


The size and impact on society of the Android/iOS platforms are way way way higher than that of Sony's and Nintendo's platforms. Nobody needs a console, but basically everybody needs a phone running Android or iOS.

Yes, basically everybody needs a phone running Android or iOS. But they don’t need Fortnite on their phone.

If the government wanted to prescribe store rates for productivity apps only, I’d be more receptive. But it’s ridiculous for the government to prescribe rates for game sales, especially if they aren’t doing the same for all game stores.


In the increasingly rare instances where Tesla's solution is making mistakes, it's pretty much never to do with a failure of spatial awareness (sensing) but rather a failure of path planning (decision-making).

The only thing LIDAR can do is sense depth, and if it turns out that sensing depth using cameras is a solved problem, adding LIDAR doesn't help. It can't read road signs. It can't read road lines. It can't tell if a traffic light is red or green. And it certainly doesn't improve predictions of human drivers.


Which begs the question: why did Tesla take so long to get here? It's only since v12 that it started to look bearable for supervised use.

The only answer I see is their goal of creating a global model that works in every part of the world rather than in a single city, which is vastly more difficult. After all, most drivers really only know how to drive well in their own town and make a lot of mistakes when driving somewhere else.


It was only about two years ago that they switched from hard-coded logic to machine learning (video in, car control out), and this was the beginning of the final path they are committed to now. (Building out manufacturing for the Cybercab while still finalizing the FSD software is a pretty insane risk that no other company would take.)

That’s the switch for controls, the machine vision was nn from the start.

Path planning (decision-making) is by far the most complicated part of self-driving. Waymo vehicles were making plenty of comically stupid mistakes early on, because having sufficient spatial accuracy was never the truly hard part.

Sensing depth is pretty important though, especially in scenarios where vision fails; radar, for example, works perfectly fine in the thickest of fog.


In "scenarios where vision fails" the car should not be driving. Period. End of story. It doesn't matter how good radar is in fog, because radar alone is not enough.

Too bad conditions can change instantly. You can't stop the car at an alpine tunnel exit just because there's heavy fog on the other side of the mountain.

If the fog is thick enough that you literally can't see the road, you absolutely can and should stop. Most of the time there's still some visibility through fog, and so your speed should be appropriate to the conditions. As the saying goes, "don't drive faster than your headlights."

> The only thing LIDAR can do is sense depth

This is absolutely false. LiDAR is used heavily in object detection. There’s plenty of literature on this. Here are a few examples from Waymo:

https://waymo.com/research/streaming-object-detection-for-3-...

https://waymo.com/research/lef-late-to-early-temporal-fusion...

https://waymo.com/research/3d-human-keypoints-estimation-fro...

In fact, LiDAR is a key component for detecting pedestrian keypoints and pose estimation. See https://waymo.com/blog/2022/02/utilizing-key-point-and-pose-...

Here’s an actual example of LiDAR picking up people in the dark well before cameras: https://www.reddit.com/r/waymo/s/U8eq8BEaGA

Not to mention they’re also highly critical for simulation.

> It can't read road signs. It can't read road lines.

Also false. Here’s Waymo’s 5th-gen LiDAR raw point clouds that can even read a logo on a semi truck: https://youtube.com/watch?v=COgEQuqTAug&t=11600s

It seems you’re misinformed about how this sensor is used. The point clouds (plus camera and radar data) are all fed to the models for detection. That makes their detectors much more robust in different lighting and weather conditions than cameras alone.


I think "sensing depth" and "object detection" are the same things in this debate though

It's just "sensing depth" in the same way that cameras provide just "pixels". A fused camera+radar+lidar input provides more robust coverage in a variety of conditions.

You know what would be even more robust under even more conditions? Putting 80 cameras and 20 LIDAR sensors on the car. Also a dozen infrared heat sensors, a spectrophotometer, and a Doppler radar. More is surely always better. Waymo should do that.

Maybe Tesla should reduce their camera count from 8 to 2 and put them on a swivel like human eyes. Less is surely always better.

I can also make “clever” arguments that are useless.


Remarkable. You managed to both misunderstand my point and, in drafting your witty riposte, accidentally understand it and adopt it as your own. More isn't objectively better, less isn't objectively better. There's only different strategies and actual real world outcomes.

> More isn't objectively better, less isn't objectively better.

Great, you finally got there. All it took was one round of correcting misinformation about LiDAR and another round of completely useless back-and-forth about sensor count.

The words you’re looking for are necessary and sufficient. Cameras are necessary, but not sufficient.

> There's only different strategies and actual real world outcomes.

Thanks for making my point. Actual real world outcomes are exactly what matter: 125M+ fully autonomous miles versus 0 fully autonomous miles.


Oh I’m sorry, I didn’t realise you think you’re in a battle of fanboy talking points. Never mind. Not interested.

Highly ironic considering you started this comment chain with a bunch of fanboy talking points and misinformation. Clearly, you’re not interested in being factual. Bye.

It's not real L3, it's marketing department L3. Two years after launch it's still only supported in two US states. Now that Mercedes got their headline, it's effectively abandonware.

If it was real L3, Drive Pilot would be considered the vehicle operator for legal purposes. Mercedes would take full responsibility for any driving infringements or collisions that occur during its use. In reality, Mercedes cannot indemnify you from driving infringements, and for collisions they only promise to cover "insurance costs" which probably doesn't include any downstream reputational consequences of making an insurance claim.


I've only rented Teslas, but I can see how most people would consider CP/AA to be unnecessary given the quality of their integrated software. But for me, the two things Tesla can't do (and CP/AA can) are:

1. Waze;

2. My preferred third-party podcasting app.


The cost of the entertainment cluster comes from the integration work. If it were just a backup camera and a CarPlay/AA head unit and absolutely nothing else, then maybe it could be OEMed from the same companies who sell aftermarket systems for $100 or so.


> Electric cars are supposed to be simple.

The only parts an EV doesn't have are the engine and gearbox. Admittedly, these are pretty major components, but it's a technology mature enough to be extremely reliable if the manufacturer cares to make it so.

But what an EV has instead is a massive battery, charging electronics, a DC-DC converter keeping the 12V battery charged, and various electric motors and actuators for the air conditioning and coolant loops. These are significantly more reliable than oily engines in lab environments, but the automotive environment tests the mettle of seemingly resilient components.


A typical ICE car will consume at least 500 gallons of petrol (gasoline) per 1 gallon of tire tread worn. The environmental impact per volume of tire is certainly greater, but it's not remotely five hundred times greater.

I'm not saying we should disregard the issue of tire pollution. But if it was as serious as you suggest, it would be making more headlines than it is.
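For a back-of-envelope sanity check on that ratio, here's the arithmetic with every input being my own rough assumption (30 mpg, 50,000 miles per set of tires, ~6 mm of tread worn off four tires of about 2.0 m circumference and 0.2 m width), not measured data:

    // Rough sanity check only; all inputs are assumptions, not measurements.
    const miles = 50_000;
    const mpg = 30;
    const fuelGallons = miles / mpg;                   // ≈ 1,667 gallons burned

    const treadWornM3 = 4 * (2.0 * 0.2 * 0.006);       // ≈ 0.0096 m³ of rubber shed
    const treadGallons = (treadWornM3 * 1000) / 3.785; // ≈ 2.5 US gallons

    console.log(Math.round(fuelGallons / treadGallons)); // ≈ 660, comfortably above 500

The exact number moves around with the assumptions, but it's hard to get it anywhere near parity.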


Why wouldn’t it be 500 times greater? Gasoline is combusted for energy, converting most of it into mostly harmless byproducts; tire tread is just released as is.


The best evidence that tyre tread is significantly less consequential than gasoline consumption is that such criticisms overwhelmingly arise in discussions of electric cars.

