M1 Macs: Truth and Truthiness (daringfireball.net)
491 points by goranmoomin on Dec 3, 2020 | 860 comments


The author bashes another author for not doing measurements, yet he himself declares that the M1 beats all the other CPUs without providing any measurements.

With regards to the M1, I'm grateful for some good competition, as this will likely push premium laptop manufacturers (Razer, Dell) towards Ryzen CPUs, as opposed to only using them in cheaper models, loosening the grip that Intel has on them.

With Zen 3 on mobile giving a similar 20% boost as it did on desktop, plus a jump from 7nm to 5nm giving another 20%, next year might be a very good year for Ryzen laptops. Looking forward to a Ryzen 6000 XPS 13.

The other thing I'm grateful to the M1 for is that it will most likely push laptop manufacturers to pay attention to thermals and noise.

The unpleasant thing is that Apple will most likely be a process node ahead, getting TSMC 3nm in 2022 while everyone else gets it a year later, with a competitive year in between.

I'm looking forward to next year, to upgrading to a Ryzen 6000 (5nm) CPU if it's good. And I'm looking forward to the beefier Apple ARM chips and comparisons between them. Also, maybe someone will compare a Ryzen 6000 Hackintosh with an ARM Mac while that's still possible.

Also looking forward to Intel's response, be it a partnership with TSMC, fixing their nodes, or, fingers crossed, a completely new RISC-V CPU.

Anyway, interesting stuff.


Yeah John Gruber has really become a shill over the last few years. His article reads like an Apple keynote transcript, everything is magical and amazing, a marvel and mindbogglingly great. No wonder they throw him so many exclusives ;)

Yes, the M1 is great. But it's a closed system (no booting Linux or Windows at this time), and even under macOS you have even less access to the system's internals than before. Also, Apple performs many tricks to get such long battery life, such as putting apps to sleep when you're not looking at them (App Nap). I find this extremely irritating when working with my Mac, but I can't turn it off completely even though I have a mini and thus battery life is not a factor at all. My work MacBook spends its entire life docked with the screen closed too.

The M1 is a solid platform for sure, better for laptops than Intel/AMD's offerings, I agree. But I've had enough of Apple's (and Gruber's) hypey marketing. It's not magic, just good engineering. I stopped watching their keynotes for the same reason.


> John Gruber has really become a shill over the last few years.

I think it's important to read Gruber and understand he's an Apple person. Not a shill, as much as his world is all Apple. Sure, he checks out an Android phone every couple of years, but for the most part he doesn't care what's going on outside the Apple ecosystem. And, that's fine. As a reader of Gruber, I read for his takes about Apple.


That's fair enough, I have occasionally read his blog over the years (mainly linked from other sites such as MacRumors). That's also how I saw how many scoops he's been given by them (because if they were actual leaks I'm sure he would have been cut off by now, knowing Apple's secrecy). But I don't remember him being this lyrical about Apple as he is here.

I used to be an Apple fan, and the Mac used to be my daily driver (even though I've always been an "every OS at the same time" guy). But my enthusiasm has waned a lot lately due to the increased lock-in and 'iPadification' of the Mac, especially the stronger ties with Apple services: as an "every OS" guy this obviously doesn't work for me. Basically, it is getting decidedly harder to use Apple stuff well if you don't want to do things exactly "the Apple way". The Mac didn't use to be like that. It was more like a general unixy OS that wasn't so opinionated.

So, as my opinion has changed it is possible his admiration for Apple is irking me more than before :) Sorry if I offended by calling him a shill. But he adopts Apple's flowery language filled with superlatives as if their products are some kind of magic inconceivable to their competitors. That rubs me the wrong way as I do consider the world outside Apple which is not bad at all.


What lockdown is grating to you? I grew up on BSD multi-user systems and Apple things still look the same to me: crusty BSD with ancient coreutils in the terminal, nice rounded corners in the GUI.

What is the iPad-ification? Just the code signing stuff? I guess you can't write to /System anymore these days? And there's something about app notarization, but it doesn't really affect any of my uses.


Almost all GNU userland tools and libraries work on BSD, and you can install them fairly easily with brew. Unless you are doing Linux kernel development, there is little you'll miss out on.


I'm typing this on FreeBSD. If the most interesting part of a Mac to you is the BSD layer ... Well, I doubt that. People like Mac for the UI and Cocoa stuff, plus hardware and integration thereof.

The way you've substituted "BSD" for macOS really doesn't make sense. Yes, there is BSD code in macOS, quite a bit of it in fact. But they're not the same thing.


You are right that people who buy a Mac don't get it for the FreeBSD kernel. However, most people who complain about not being able to run Linux on an M1 can be just as productive with the alternatives on macOS.

For 99% of other things that run in a terminal, you can get it to work on FreeBSD/macOS. Also, for Python folks, there is the Anaconda distribution, which is agnostic to the underlying OS. Ditto for Node or Ruby development. GUI Emacs and vi also have decent ports to macOS.

That said, I guess if you are doing some custom server side development that really needs Linux, you will have a hard time with M1 - virtualization and emulation don’t mix and your code might not be portable enough.


Again as a frequent FreeBSD user, "FreeBSD/macOS" is not really a thing. There is historical shared code but they are different beasts. Sometimes there is substantial porting involved to get things from one to the other and they are not really the same entity. Additionally, the BSD pieces of macOS are frequently out of date and a hodgepodge of components.

Personally, I have very frequently had the experience that Unixy stuff that "just works" on *BSD or Linux does not work on Mac straight away, or breaks with a new OS release, etc. It takes effort for the homebrew folks or the macports folks or whatever to keep stuff working.


It's not a FreeBSD kernel, it's a Mach kernel. The userspace is based on FreeBSD though.


Mostly agree, and upvoted, however I think it’s important to acknowledge (in thread about M1 Macs) that full brew support will be a large undertaking for M1.


It seems to be going well so far. I would say it's around 70% of the way there[0].

What's left is a few major runtimes (OpenJDK; Go is waiting for 1.16 to be released). Many of the packages that are not properly supported require Go [4], Rust [2], Mono [3], or Java to support arm64.

It's impressive how much work Apple has done behind the scenes to get M1 support into many open source packages; see [1] for an example.

  [0] https://github.com/Homebrew/brew/issues/7857
  [1] https://github.com/Unity-Technologies/mono/commit/3f2278e081484e6616ee5892895f999610453ca9
  [2] https://github.com/rust-lang/rust/issues/73908
  [3] https://github.com/mono/mono/pull/20014
  [4] https://github.com/golang/go/issues/38485


Among other problems, Safari keychain backups don’t work unless you store them in iCloud (and there’s no warning about this):

https://news.ycombinator.com/item?id=24626562


This does appear to be sorta documented:

Note: You can’t copy passwords stored in your Local Items or iCloud Keychain. To transfer these keychain items to another computer, set up iCloud Keychain on the other computer using your iCloud user name (normally your Apple ID) and password. You can manually copy keychains other than Local Items or iCloud Keychains to another Mac using the steps below.

https://support.apple.com/guide/keychain-access/copy-keychai...


Ah, good find! It appears that the Big Sur documentation notes this right up top, whereas for the earlier OS versions it's relegated to a note at the bottom.

Either way, I'm not sure this counts as a warning! It rather reminds me of:

As you are probably aware, plans for the development of the outlying regions of the galaxy invoke the building of a hyperspace express route through your star system. And your planet is one of those scheduled for demolition.

(Shouts of terror emit around the globe.)

There's no point acting all surprised about it; the plans and demolition orders have been on display at your local planning department in Alpha Centauri for fifty of your earth years. If you can't be bothered to take an interest in local affairs, that's your own lookout.

Most people setting up their keychain without iCloud will never see this, and there's not even an explicit mention here of the issue with backups (though one could sort of infer that).


Agree. He is an Apple guy. He delights when Apple does cool stuff. Sometimes he takes Apple’s side on debatable subjects. But he has no issue calling Apple out when he thinks they are doing wrong. It’s unfair to call him a shill.


Agreed, shill isn't the right word to use. Fan, cultist, or zealot would be more in line (depending on the desired impact and tone of the expression)


I think "cultist" or "zealot" applies more to the FOSS/Stallman crowd.


There are multiple cults.


Live by the sword, die by the sword. Fully-committed shills are not shills.

Edit to clarify: I meant that a shill with skin in the game is less of a shill. Of course a shill can be "fully commited" to shilling, but if they are fully committed to the thing which they are promoting, then they are not a shill in my eyes.


> Yeah John Gruber has really become a shill over the last few years

I remember reading sentiments to that effect since 2004. I doubt he's getting paid directly, so he could only be a shill in a very loose sense of the word. He does get paid in review units and access, though.

I think he's maybe overly enthusiastic about the good stuff, and skews toward the positive when matters are questionable but he has called out Apple's corporate malfeasance and pointed out technical shortcomings in the past. He's biased but that's a one man blog for a specific audience.


Personally I do think that very access to inside information makes him at least partially an Apple-controlled outlet. He's released some serious scoops about upcoming products, and his verification of a rumour is basically taken as a confirmation in the Mac rumours scene.

It's of course possible he has some kind of access that Apple don't control (like an employee leaking information), but given their extreme focus on secrecy, I'm sure that this would have long been shut down if it wasn't fully authorised.

That kind of access does not come without ties, IMO. Though I agree I was coming on too strong when I said shill.


I'm sure leakers go to him to give him stuff, just like they'd go to any other press source.


> he has called out Apple's corporate malfeasance and pointed out technical shortcomings in the past.

He does this all the time. In his last podcast or two he has complained about Apple's keyboards, hot laptops, short charger cords, Siri, and the Touch Bar. I'm sure there are others but I haven't gone back.


FYI, they have to ship back the review units. Of course there is a financial aspect to the fact he gets the unit first and can publish the review and get the eyeballs, but it's just something to know about Apple, they don't give anything.


I get that, I didn't mean to imply he gets to keep them but as you say, just getting them a week in advance is a benefit.


> Yes M1 is great. But it's a closed system

It doesn't matter. Sure, that might mean that an Apple system is not for you.

But the crux of the matter is: Apple did something truly revolutionary. How is the rest of the market going to respond? Specifically, how are Intel, AMD and Microsoft going to respond? Microsoft, at least, has Windows running on ARM, but it doesn't have Rosetta. Intel and AMD have nothing remotely comparable.


> Apple did something truly revolutionary

This is hyperbole. Apple has released a CPU that has very good perf/watt compared to its current competition. That's it. This isn't revolutionary; it's just another incremental step in CPU improvements.

Can we please give Apple praise for competing well in the hardware market without fully drinking the Kool-Aid and parroting their marketing?


Sounds revolutionary to me. The new M1 is faster and MUCH lower power than an i9-based system that would ramp up its fans at even the most minor of workloads, like upgrading iTerm2 from version n to version n+1.

Suddenly $700 Mac minis can do crazy things like edit 4K video, and the M1 laptops can get by plugged into wall power much less often than the i9 systems.

Within 12 months there are likely to be systems with twice as many cores, bringing unheard-of levels of performance to reasonable price points.

Keep in mind Intel hasn't shipped something significantly faster than the previous generation in 5 years.


I would like the people downvoting you to explain why they think it's truly revolutionary. I just went back to reread the Tom's Hardware article on the chip, thinking I must have missed something, and I'm not seeing it.

Full credit to Apple for incrementally improving their A-series processors over the last decade until they're good enough for laptops. But I'm not seeing that this makes anything fundamentally different. Just compare the A14 with the M1, for example: https://www.cpu-monkey.com/en/compare_cpu-apple_m1-1804-vs-a...


>I would like the people downvoting you to explain why they think it's truly revolutionary.

For one, it's the first PC chip with this kind of thermals-and-power combo.

Second, it brings many new design decisions, from unified memory, to the SoC design, various coprocessors, etc, that have not been available in the form of laptop/desktop PCs. The last of this kind in the mass market would be something like the Amiga, which is still a very different design.

Third, it's hella fast at many different things, both relative to its power draw and irrespective of it.

Fourth, it suddenly makes ARM mainstream for PC computing, after decades where it was relegated to the mobile and hobbyist/specialist (Raspberry Pi, etc.) space.

If that's not a revolutionary change, I don't know what you expected. A CPU made from flour with alien technology that's 1000 times faster and uses quantum bits?

Yeah, they didn't make that.


Like the iPhone and iPod, it's a revolutionary entrant from the consumer point of view in terms of user experience. And like the iPhone and iPod, from a technical point of view, it's merely a very competent iteration down a natural evolutionary pathway, in a preexisting product type.


And like the iPhone and iPod it has very real innovations (in engineering) that people always underplay because "No wireless, less space than a Nomad, lame!", or "we had smartphones before, I had some Ericsson/Nokia/MS gizmo, so what's evolutionary about the iPhone?".


Ted Nelson: "It's possible to insist that every change is merely a small difference in degree if you do not want to see a change in kind"


You're right. At the end of the day, they're technically evolutionary, not revolutionary, which was the central contention of this subthread. And there's nothing wrong with that. Technical evolution can lead to consumer revolutions.


I disagree. The iPhone was revolutionary in that it made smartphones and phone apps accessible to the mass market, unleashing a wave of change in how people live, work, and play.

M1 Macs will be used basically like the previous Macs, but somewhat faster and somewhat longer without plugging in. So I don't think this qualifies as a revolutionary change.


I think the M1 Macs are a precursor for a truly revolutionary (for consumers) Apple ARM computer. Whether that will be realized with the M1X/M2 next year or further on, it remains to be seen.


Ok. I'm glad we now agree that this is not revolutionary.

What's the revolution you are hoping for with future generations of the processor?


These all strike me as evolutionary changes. People will be doing about the same things as before. They will do them a bit faster, and not have to plug in as often.

I expect revolutionary changes to overturn an existing order. E.g., the Mac was revolutionary. Smartphones were revolutionary. This is a solid and impressive evolutionary change. Kudos to them, of course. But when actual revolutions happen, I would like to have a word handy to acknowledge that.


Smartphones can be just as easily trivialized into being something which lets us "do the same things as before, but faster and with less plugs".

I think it is wrong to say there is a clear difference between these kind of changes. We won't really know how "revolutionary" M1 was until we see the landscape of the market in the future, but I think the simple fact of dropping x86 alone is enough to enable some major new ways of looking at desktop computing.


They cannot in fact be that easily trivialized. You're welcome to try, though, if you think you can demonstrate the point.

If you really believe that non-x86 laptops are what makes for a revolution, then the credit here doesn't belong to Apple. For example: https://www.theverge.com/2020/1/6/21050758/lenovo-yoga-5g-sn...

or: https://www.cnn.com/2020/03/09/cnn-underscored/galaxy-book-s...

But personally, I'd say that those are an interesting step, but not particularly revolutionary.


The credit doesn't really belong to Apple for big screen primarily-touch based communication devices either, except that they popularized them.


I think that's incorrect. As a user of a pre-iPhone smartphone, I don't think Apple should get credit for inventing the smartphone, but I do think the iPhone was revolutionary. They didn't just popularize it; they created a true consumer-grade device via relentless user-focused polish. It's the same deal with the original Mac, which was also revolutionary.

But here, there's nothing radically different about M1 Macs that will open up vast new markets or notably change the daily lives of a purchaser.


It's not necessarily revolutionary, but it's not just a small evolutionary step. But either way it's just pedantic.


I'm glad we both agree that revolution is not the right word. I think precision in language is something that is generally valued at HN, especially when the lack of precision is marketing hype.


Exactly how is it much better? It's not the first chip with a 15-25W TDP. When compared to other chips with such a TDP, it's incrementally faster in single core and it trades blows in multicore. It does this at a process node ahead.

For very light loads, it does have a slightly better power performance than existing low power PC processors, but not by that much. A Ryzen LPP core is somewhere between the big and the small cores of the M1 in power performance in such cases.

It's not the first PC chip to be an SOC, or to have unified memory, or to have many coprocessors. Literally all of those were done simultaneously, in chips of comparable performance, at the same power.

We've also known for a decade that instruction sets don't matter much. All modern PC processors internally use a different instruction set and translate from x86. Switching them to ARM would be a matter of adding a few extensions to expose all the functionality and changing the instruction translation stage a bit.


>Exactly how is it much better? It's not the first chip with a 15-25W TDP. When compared to other chips with such a TDP, it's incrementally faster in single core and it trades blows in multicore. It does this at a process node ahead.

Being a process node ahead is already "much better", and IS a feat. Where are the process node ahead chips from the competitors?

On top of that, it's Apple's own SoC design, it has its own GPU design and co-processors, tons of work into making speed optimizations all around (from the buses to memory handling), and came with a new OS release, a port to the new architecture, a bridge layer for iOS apps, and a perfectly capable x86 translation layer.

Rosetta 2 alone would be difficult to pull off for most of the industry (Microsoft failed at the same thing, and their ARM laptops are completely subpar to the M1 machines on many levels, despite costing the same or more).

>It's not the first PC chip to be an SOC, or to have unified memory, or to have many coprocessors.

Which mainstream PC chip in actual laptops/desktops people use has unified memory and doesn't use sockets/DIMMS?

>It's also been a decade that we know that instruction sets don't matter. All modern PC processors use a different instruction set and translate from x86.

That's a moot point, since they still carry all the x86 baggage in microcode, plus tons of bad decisions leading to things like Spectre. ARM doesn't.


In terms of GPU cores it seems evident that Nvidia is clearly number one—disregarding any thermal limits—and AMD is clearly number two. It seems Apple is a solid contender for third place now. I don't think Intel or the smaller embedded players (Mali, PowerVR etc) have anything quite so good.

Am I missing something?


Dunno, do you want an RTX 3070 for $499 or an AMD 6800 for $580? The benchmarks come out at about the same per $ (AMD wins by 10-15%). Sure, you could get the 3080 ... or the 6800 XT. Pick your price point and AMD and Nvidia will have something close. Sure, AMD has nothing to compete at the $1500 price point, but the reviews I've seen claim that the $1500 Nvidia is just a big waste, not close to justifying the price.

PS5, Xbox Series X/S, and Apple have been preferring AMD lately. Even Tesla seems excited by the RDNA2 cards.


Yes, Nvidia and AMD are #1 and #2. I was contemplating who might be #3 right now.


>Being a process node ahead is already "much better", and IS a feat. Where are the process node ahead chips from the competitors?

Not if you still don't beat them cleanly. The end result is what matters. Also, being able to pay more than other people isn't a technical feat.

>On top of that, it's Apple's own SoC design, it has its own GPU design and co-processors, tons of work into making speed optimizations all around (from the buses to memory handling), and came with a new OS release, a port to the new architecture, a bridge layer for iOS apps, and a perfectly capable x86 translation layer.

Half of that is software, which I didn't mention, and the rest is par for the course. The buses and interconnects in a Zen chip are far superior, which you notice as better I/O.

>Which mainstream PC chip in actual laptops/desktops people use has unified memory and doesn't use sockets/DIMMS?

You didn't mention sockets/DIMMs, but literally any AMD APU has unified memory and uses memory DIMMs. Seems like the best of both worlds to me: just as good memory performance, and it's upgradeable.

>That's a moot point, since they still carry all the x86 baggage in microcode, since tons of bad decisions leading to things like Spectra. ARM doesn't.

AMD is not affected by Spectre. ARM also uses speculative execution.


> they still carry all the x86 baggage in microcode, plus tons of bad decisions leading to things like Spectre. ARM doesn't

> AMD is not affected by Spectre

This is not true. AMD, ARM, Intel, and IBM are all affected by Spectre (Bounds Check Bypass) and that particular variant has not been fixed in hardware, and short of disabling speculative execution or radical changes to ISA or microarchitecture, is likely to never be fixed purely in hardware.

https://arxiv.org/pdf/1902.05178.pdf


AMD's new CPUs, post-FX, are not affected by the majority of Spectre vulnerabilities. The ones that do affect them are fixed in software with essentially no performance penalty, less than even ARM CPUs pay.


> Exactly how is it much better? It's not the first chip with a 15-25W TDP. When compared to other chips with such a TDP, it's incrementally faster in single core and it trades blows in multicore. It does this at a process node ahead.

Ok. Where can I buy a laptop with similar performance, battery life and cooling then?

Same arguments came out with the first iPhone. “It’s just a big screen! Other phones have a big screen!”


Similar performance and cooling? Buy a Ryzen 4800U laptop.

Similar battery life? A good part of it is down to the software, sadly.

For the trade-off of 8 instead of 12 hours of battery life, you'll have a repairable and open computer. Seems competitive. You also get upgradeable RAM, a GPU that is actually useful because it has drivers for a useful API, and much more I/O.


1) M1 is significantly faster than 4800U at single-core.

2) M1 is up to 20 hours of battery life.

3) Not the software since the existing Intel based Macs share the same OS.


M1 is somewhat slower in multicore, and has a fifth of the I/O performance. They are comparable.

Importantly, you're comparing the M1 in a Mac Mini, to the 4800U in a laptop. Comparing the M1 in the Air, for example, the 4800U is quite a bit faster in multicore and the single core performance advantage shrinks.

As for 2 and 3, you're comparing to AMD, not to Intel. AMD CPUs have significantly better power efficiency than Intel.

If you want to compare battery life without taking into account software, compare power consumption. The Zen chip uses slightly more power than the M1 on a laptop.


>As for 2 and 3, you're comparing to AMD, not to Intel. AMD CPUs have significantly better power efficiency than Intel.

Still, nothing near the same battery life has been seen in any AMD laptop. Not even in ones with less performance.

Seems that you're grasping at straws at this moment :-)


No I'm not. I'm specifically considering two chips.

Those Macs have both much better software for power management, and much more battery. You can't compare chip power consumption by comparing total endurance of the system.

When you do compare the two, they're within 30% of one another.


Read the AnandTech article again. 15-25W is the power consumption of the whole Mac mini measured at the wall plug, i.e. it includes the power supply, SSD, a GPU comparable to an RX 560, external buses, etc. If you want an x86 chip that beats its performance by a significant margin, you have to pay more for just the chip than for the whole Mac mini.


The 4800U also includes a GPU, and 10W is very generous for SSD and RAM.

Since both are SoCs, external buses and so on are handled in the chip.


I would like people arguing the opposite to define "revolutionary".

The combination of the performance increase with the power and thermal efficiency in the first release for a laptop is the sort of improvement we haven't seen in a very long time.

> I'm not seeing that this makes anything fundamentally different

I guess we'll see. $1000 buys a lot more laptop utility than it did a month ago, and the abrupt availability of a laptop with higher performance and far better battery life than anything else on the market has certainly gotten a lot of people's attention.

I'm not buying one[1], but I'm really curious to see the M2.

[1] I'm really torn, the atrophy of the Unix side of the OS and the increasing lockdown has frustrated me a lot. But the option of just running MacOS as a hypervisor that happens to run Photoshop may be just fine for me... I'll decide when the current machine dies.


Sure. To me evolutionary generally means quantitative improvements in expected dimensions. Revolutionary requires overturning some existing order. E.g., the American Revolution replaced the existing government.

I'd call the original Mac revolutionary, in that it totally changed how people interacted with computers. Laptops were revolutionary. Smartphones were revolutionary.

A new generation of laptop that runs somewhat longer without plugging in is not revolutionary. People will be doing the same things.


All-day on-battery real-world heavy usage without significant compromises. In fact, not just without significant compromises, but about as fast as a laptop can go.


That's a nice feature, sure. But I don't see it as revolutionary. Are there a lot of people suddenly able to live or work differently because of this?


For me, not constantly thinking about the battery is a big deal. The same thing happened with my phone: its battery got large enough a couple of years ago that it lasted a full 2 days, so I could assume it would work all day.

Same deal here. I can go (in a !2020 world) to a customer, go from meeting to meeting, etc and it will just work.

edit: Also, it feels like we're transitioning from a world in which laptops are mostly / often used while being plugged in, to a world more like a phone. The default mode of operation is disconnected from a power source.


I agree that's an excellent step forward, but again, I'm not seeing it as a revolution. All-day batteries were always available in the Android world, so I don't think there was any revolution there. Similarly, there are plenty of laptops with all-day batteries. That list includes, but is not topped by, both M1 and previous Macs: https://www.laptopmag.com/articles/all-day-strong-longest-la...


Ah. I found my macs to have all day battery as long as I didn't actually really use it. So it could kind of happen, but only if I paid a lot of attention. I totally recognize this may not make a diff for everyone, but for me, it's moving to assuming the battery lasts all day w/o me managing it. :shrug:


It's revolutionary in the sense that Apple is the one who did this rather than Intel or AMD?


> "Microsoft, at least, has Windows running on ARM, but it doesn't have Rosetta"

I thought that Microsoft's emulation efforts were hampered by Intel making threats, e.g. "Intel recently made an unprecedented public challenge to Microsoft and Qualcomm that basically told the latter two companies: if you ship an x86 instruction set architecture (ISA) emulator, we’re coming after you.", from https://www.forbes.com/sites/tiriasresearch/2017/06/16/intel...

If so, the blame for that really lies at Intel's doorstep as well.


Intel still has patents on things like some of the newer vector instructions - Apple AFAIK will not translate those, and will not report them as available.

Microsoft has a performance disadvantage in that the memory consistency model of ARM is much weaker than x86's ordering or x86-64's total store ordering. Apple's chips (at least back to the A12Z) support enabling TSO per thread on the performance cores, so the translated code does not need as much fencing.


Who knows what kind of weird deals Microsoft and Intel have.

Also, Microsoft historically is awful at transitions. Look at the 64-bit print driver transition - a complete shitshow.


I don't think it was awful. The failure wasn't the 64-bit transition itself; it's that they simultaneously forced drivers to be signed.


Did Intel similarly threaten Apple? If not, why not? Even if they didn't know for sure exactly what Apple was up to with the M1, they surely could have speculated, as so many others did.


FWIW, the twentieth anniversary of the publication of the x86-64 specification was apparently August of this year (https://web.archive.org/web/20120308025559/http://www.amd.co... , via WP https://en.wikipedia.org/wiki/X86-64#History ), while MS announced x86-64 emulation support for ARM W10 at the end of September https://uk.pcmag.com/migrated-3765-windows-10/128922/microso... https://blogs.windows.com/windowsexperience/2020/09/30/now-m... .


The threat was regarding emulation. Rosetta 2 isn't emulation, it's translation.


Rosetta 2 includes both ahead-of-time translation and run-time emulation. It can run x64 jit software for example.


The jit is run-time translation still - not run-time emulation.


That is literally how performant emulators work.


At that level though, emulation/translation/simulation is blurred.


Somewhat, they're really not exclusive. You could say Rosetta is an x86 emulator that uses AOT binary translation and JIT compilation.


Can you provide a definition that specifies the difference? The end result is identical: being able to run software compiled for foreign ISA.


It doesn't emulate a whole CPU and have a simulated CPU run the code.

It translates instructions directly to the different ISA.
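As a toy contrast (the two-op mini-"ISA" here is invented purely for illustration), an emulator fetches, decodes, and executes each guest instruction in a dispatch loop every time it runs, while a translator converts the whole program to host code once, up front - here a generated Python function stands in for native machine code:

```python
# Toy contrast between emulation and translation for an invented
# stack-machine ISA: ("push", n) pushes a constant, ("add",) pops
# two values and pushes their sum.

def emulate(program):
    """Emulation: fetch/decode/execute every op, every time."""
    stack = []
    for op in program:
        if op[0] == "push":
            stack.append(op[1])
        elif op[0] == "add":
            stack.append(stack.pop() + stack.pop())
    return stack[-1]

def translate(program):
    """Translation: compile once to host code (Python source here,
    standing in for native machine code), then run it directly."""
    lines = ["def _native():", "    stack = []"]
    for op in program:
        if op[0] == "push":
            lines.append(f"    stack.append({op[1]})")
        elif op[0] == "add":
            lines.append("    stack.append(stack.pop() + stack.pop())")
    lines.append("    return stack[-1]")
    ns = {}
    exec("\n".join(lines), ns)
    return ns["_native"]

prog = [("push", 2), ("push", 3), ("add",)]
assert emulate(prog) == translate(prog)() == 5
```

Both approaches produce identical results, which is why the thread's terminology argument is blurry: the difference is *when* the decoding cost is paid, not *what* gets computed.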


So why does Microsoft go the translation route instead of emulation?


Rosetta 2 is running on an Apple chip which they’ve designed to have an X86 strong memory model mode. Microsoft has to run on off the shelf ARMv8 chips.


They don't make their own chips like Apple does.


Doesn't the force behind that threat depend heavily on the outcome of Oracle v Google?


What is revolutionary about it? We still haven't seen Zen in the same node as the M1. Zen still scores higher in absolute terms (albeit at higher power).

It's evolutionary. The power efficiency is great, and the core design is great (wouldn't expect less from a team drawn from Intel's and Qualcomm's most talented staff), but calling it revolutionary is an overstatement.


Using an M1 Air, coming from a 16" Pro, here's the diff:

- Cost: $1400 vs $4000 (nearly 1/3)

- Speed: Air feels significantly faster, and as more apps move to native, it will be even more so

- Battery: At least double, if not 3x

- Size: Less than 1/2 the weight

The revolutionary thing is doing all of those at once. Nearly tripling battery life while making a huge leap in performance, at 1/3 the cost and half the weight is simply incredible.


Comparing it to previous-gen Apple devices doesn't really tell the whole story. You can get a machine from a different manufacturer with similar hardware for much, much less than $4000, and it won't have totally broken cooling[0].

[0] https://www.notebookcheck.net/Apple-MacBook-Pro-15-Core-i9-s...

Bear in mind that I'm not arguing that the m1 macbooks aren't a really compelling option. I'm just saying this is evolutionary, not revolutionary.

(Making other laptop/desktop manufacturers and OS devs wake up to non x86 architectures could be revolutionary for competition and pricing though, we'll see.)


For years we've been evolving. I've owned ~6 laptops over the last 10 years, all Intel Macs. Each year you get about a ~20% performance increase for the same "costs" (weight, battery, price).

This year you get about 100% performance increase and the costs are all halved. Ok, so... we'll call it a "huge" evolution. All I know is I've been waiting since the air came out for this laptop to exist: fast enough to handle all my work, with decent battery life. It's been almost a decade of waiting, and not only did it finally come, but it somehow is much faster than even the biggest of last generation and super cheap.

If you made a graph of a value metric like (perf × battery-life) / (price × weight), you'd see a nice evolutionary curve over the last decade, and then a huge jump this generation - way more than double. You're ignoring the fact that this is a 10W CPU doing more than 28W parts, fanless! And the difference in subjective performance isn't small; it feels bigger than any previous generation's jump by quite a lot...
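As a toy illustration of that graph (all numbers here are hypothetical and made up for shape, not measurements), a sensible composite puts the higher-is-better terms on top and the lower-is-better terms on the bottom; generational ~20% changes then move the metric gently, while halving cost and weight and doubling perf and battery at once moves it by an order of magnitude:

```python
# Hypothetical value metric. Higher perf/battery is better; lower
# price/weight is better; so combine them as a ratio. The numbers
# below are invented for illustration only - not benchmarks.

def value(perf, battery_hrs, price_usd, weight_kg):
    return (perf * battery_hrs) / (price_usd * weight_kg)

prev_gen = value(perf=100, battery_hrs=8, price_usd=4000, weight_kg=2.0)
m1_air = value(perf=200, battery_hrs=18, price_usd=1400, weight_kg=1.3)

# Doubling the "good" terms while halving the "bad" ones compounds:
assert m1_air / prev_gen > 10
```

The point of the sketch is just that improvements along all four axes multiply rather than add, which is what makes the jump feel discontinuous.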


AMD laptops are also crushing this generation of Intel laptops. AnandTech's comparison shows that M1 vs AMD is mostly a power/performance trade-off at the moment.

Not that the M1 isn't exciting especially in the ultrabook segment.


What's the watt / battery comparison?


Why are you using Apple’s discontinued 15” model as a benchmark here?

The 16” MacBook Pro most certainly does not have those same cooling issues.

But that’s the thing: Apple had to make their flagship performance laptop bigger including a larger battery than its original design in 2016 because Intel was never able to hit their roadmap goals.

That doesn’t sound like progress to me, it kind of sounds backwards.

In other words, Apple couldn’t give you a reason to buy their performance laptop four years later without making it physically larger. Because if they didn’t do that their customers would only have minimal incremental improvement to compel them to consider a newer system after four years.

In contrast, a four-year-old iPhone has less than half the raw performance of the current model.

So yeah, I could buy a much cheaper PC laptop with proper thermals and cooling for much cheaper than a 16” MacBook Pro to get that fantastic performance. Maybe it’d even be a lot better than the M1 Macs.

But I can’t buy a Windows laptop, for any price, that has no fans at all and manages performance and battery life of the MacBook Air.

Apple’s throwing a CPU into their cheapest laptop, one marketed for basic usage (students, home users), that benches faster than basically all but AMD’s most performant chip - a chip that would probably get you pretty terrible thermals and battery life in a 16” MacBook Pro.

A lot of evolutionary products were, in retrospect, revolutionary. Was the guts of an iPod or Walkman or iPhone or IBM PC or Honda Civic all that different from conventionally available parts? No, consumer electronics almost never have anything particularly exotic going on.

But it’s the whole package that can be revolutionary. The Civic was a dirt cheap car with excellent fuel economy and reliability. Is it all that different from other cars? No, not really, but the entire Japanese auto industry was a disruptive product in the 1980s.

The rumor mill is already telling us that the 16” MacBook Pro may ship with a 12 core (8 big, 4 little) “M1X” or “M2” chip. If the tech media landscape is head over heels for the M1, this new chip is going to make serious waves, because if you double the performance cores in the M1 it’s basically going to be untouchable by anything that can remotely be considered a laptop.

If Apple doesn’t touch the form factor of the 16” MacBook Pro it’s going to have similar results: it’s going to run quieter, cooler, and longer.

In short, it’s just my own opinion that calling this evolutionary rather than revolutionary is something of an understatement.


> Yes M1 is great. But it's a closed system ...

> Apple did something truly revolutionary

While I don't deny they deserve a lot of credit for this, I think it's important to keep in mind that in many cases Apple does this by exploiting its lack of openness. ie: these things are not unlinked.

So where some people see Apple innovating and achieving breakthroughs, I also see them exploiting and leaching off an ecosystem in a toxic way. It's like a chemical plant that dumps its waste into the local river - they get a free ride (not complying with interoperable / openness) which is fine if they are a small part of the ecosystem. But if every company was like that we would be living in technology hell. There would be no internet, no ecosystem of reliable affordable server infrastructure, there wouldn't be a single phone you could write an app for if it wasn't in the corporate interest of a mammoth entity, etc etc.


Microsoft does have their own Rosetta, which has been able to run x86 apps for a long time and will be able to run x64 apps within the next few months in preview builds


Microsoft’s solution is emulation, not binary translation. This is a much slower approach than Rosetta. Hopefully this gives MS the kick in the pants it needs to step up their game though.


IIRC Microsoft did ahead-of-time binary translation to get PowerPC Xbox 360 games running on the Xbox One. Wonder if they’ll do the same here.


JIT vs AOT would be a better term.


Has Intel really had any response even to Ryzen?


Their response was probably more along the lines of previous "advertising campaigns" [1] (i.e., payola) paid to the big PC manufacturers to avoid using AMD for their high-margin items.

They can't do that with Apple.

[1] https://www.theatlantic.com/technology/archive/2010/07/dells...


Depends on the price point. Largely Ryzen is winning at the moment but Intel has some compelling options at the low end. E.g. the 10100f looks pretty good if you're not wanting to spend much.

Of course AMD haven't released 5000-series options at the low end yet, so you can only choose between Intel and a Ryzen 3000-series CPU.


Tiger Lake-H and Alder Lake sound pretty good but they haven't been released yet.


Tiger Lake laptop processors have been released and they are competitive with Ryzen 4000 mobile processors. All these comparisons of the M1 to 10th-gen Intel CPUs are quite unfair when the 11th-gen 10nm chips are on the market now and have performance characteristics much closer to the M1's.

Disclosure: I just chose a Ryzen 4500 budget machine, and the 1135G7 is comparable.


The M1 as a computer is too locked down for me, and it seems like for you. But the M1 as a demonstration of the power of modern ARM chips is absolutely unmatched. I have seen a lot of people use the lockdown argument to justify a disregard of ARM overall without realizing that no, the open version of this technology will be incredible.


> the M1 as a demonstration of the power of modern ARM chips is absolutely unmatched

I feel like it's more a demonstration of Apple's competency in vertical integration, cultivating talent, reducing the impact of politics on product/technical quality and ensuring productivity within and between technical teams. Aspects where Intel has abjectly failed and hasn't found any solutions for.

Intel has tried repeatedly to push into new areas, but they've failed every time. Meanwhile, Apple has somehow figured out how to push into new market segments successfully in a crazy consistent way and the M1 is a clear indication of how healthy and functional they are as a business. Like, when was their last real failure?


Apple is absolutely leading on this front, starting in the phone space. That being said, I don't think their lead over other distributors is especially large. Apple has taken ARM seriously, and built performance chips in their mobile devices that if not for thermal constraints would handily compete with x86 systems. That being said, the same is true of a Snapdragon. Apple ARM, and certainly M1, is better than the competition, but that's because no one else has looked for performance from ARM. The chip designers haven't really been pushed yet. Now that M1 has demonstrated what can happen when the technology is freed from the phone, we'll see people catch up.


I believe Apple has a significant advantage in designing CPU cores compared to Qualcomm. They acquired the PA Semi team and have been continuously designing cores with great performance (especially single-threaded) compared to ARM's Cortex line. Qualcomm hasn't developed its own CPU cores since the ARMv8 transition.


Oh I'm not against ARM at all. In fact I have many systems running it :) Raspberry Pis and also a Pinebook. I'm also getting an M1 mini from work.

I just don't like the almost-religious overhyping of everything Apple. They're doing a great job. They're not perfect.

For example: Moorhead makes a good point IMO. Yes he's talking about software issues. But in the case of Apple, you can't separate those two. The OS doesn't work on other hardware and (in this case) the hardware doesn't work with other OSes. This works to their (huge) advantage in some cases, being able to optimise each component perfectly. But it can be a drawback too like it is here.


> For example: Moorhead makes a good point IMO. Yes he's talking about software issues. But in the case of Apple, you can't separate those two. The OS doesn't work on other hardware and (in this case) the hardware doesn't work with other OSes. This works to their (huge) advantage in some cases, being able to optimise each component perfectly. But it can be a drawback too like it is here.

Moorhead didn't make that point.

Moorhead made the point that a small amount of software that's rarely used on the Mac platform isn't working perfectly. And even the mass-market software he pointed at, like Adobe's, is already in beta testing with native versions about to be released.


> I just don't like the almost-religious overhyping of everything Apple. They're doing a great job. They're not perfect.

I'm in the same boat as you and the other OP up-thread. I'm amazed by what the M1 has accomplished, but less so because of what it represents for itself (and for Apple) and more for what it shows ARM is capable of.

As an architecture, ARM is impressing the Hell out of me as of late. Sure, it's not touching Intel and AMD in terms of raw performance, but once you start looking at the performance-per-watt, the story is absolutely different. x86's power budget is looking awfully bloated by comparison.

What happens if NVIDIA's acquisition of ARM goes through? Do they start producing their own CPUs?


In your quest to avoid almost-religious overhyping of everything Apple, I'd contend that you're presenting the inverse: almost-religious anti-hyping of everything Apple. We can criticise them for their seemingly blinkered focus on casual consumers and end-users at the expense of developers and technologists without finding ways to rationalise away anything good Apple does.

Attributing the positive aspects of the M1 to the ARM architecture is contemporary revisionism. If the instruction set were the key differentiator, Microsoft's Surface RT would have been a runaway success and Qualcomm would have taken over the notebook CPU market by now. Little of what makes the M1 so impressive has anything to do with the ARM instruction set. The impressive parts are things like the entirely in-house GPU and the mild hardware acceleration of Rosetta 2.

It's also worth noting that Apple has been deeply involved in ARM since the late 1980s. They were a co-founder of Advanced RISC Machines Ltd. along with Acorn and VLSI. Apple were deeply involved in development of the ARM6 architecture and the Apple Newton was one of the first major commercial applications. So even if you limit your amazement to the ARM architecture, a portion of that credit goes to Apple anyway.


I appreciate the correction.

> In your quest to avoid almost-religious overhyping of everything Apple, I'd contend that you're presenting the inverse

I must point out that I believe you misinterpreted my intent. I'm largely indifferent to Apple. I'm not entirely sure if you'd find that more offensive or less since I'm interpreting this response as one of annoyance (I'd appreciate a clarification to this end).

I don't use Apple products. And you're absolutely right, the M1 is an impressive feat.


Understood. Intent is often difficult to robustly convey without being neurotically verbose.

I promise my response was not borne of annoyance. I just got the impression that you wanted to focus credit on the one thing about the M1 SoC which could plausibly be attributed to anyone other than Apple. That seemed a stretch, is all.


> without being neurotically verbose

Unfortunately, I know what you mean. I have a tendency toward verbosity that makes it feel that every time I write a post I'm crafting a treatise that might make the great statesmen of old blush (with the exception that I'm far too stupid to make it as interesting or entertaining).

And yes, intent is difficult to convey through text even if you're afforded a great deal of time and effort.

> you wanted to focus credit on the one thing about the M1 SoC

No, not at all. The ML cores and other, err, accessory CPUs they've added are incredibly fascinating. Mostly I was thinking of the ARM portion of the hardware.

Truthfully, I confess that part of my excitement about ARM is because I genuinely want to see more vendors follow in Apple's footsteps and either a) craft their own designs or b) build out ARM-based platforms that will lead toward greater competition in the desktop/laptop markets (or both?). Fortunately, Apple's tendency toward setting market trends gives me some hope that we'll see noteworthy contributions that embarrass x86 and upend the idiotic convention that I think the M1 has finally broken: namely, that you can have a chip that works on an incredibly tiny power budget or you can have performance. Apple's proven we can have both, and I think that's a good step forward. It excites me that we may actually see viable competition this decade, and if we do, I'll absolutely credit Apple with upending conventional wisdom and proving that our fixation on x86 for "performance" was myopic and unnecessary.

Maybe that explains my reasoning a bit more. At the expense of being unnecessarily verbose (though you may appreciate that). :)

Also, I owe you an apology. I wrote my reply out of some frustration last night. Regrettably I allowed myself to get sucked into a "discussion" with someone I didn't know was an antivaxxer until I realized their refusal to listen to an explanation of microbiology wasn't born out of lack of interest but rather suspicion. When I saw your post, I genuinely believe I unfairly took some of that frustration out on you. I'm truly sorry about that.


The explanation is appreciated. We’re all human and passions drive us in weird, occasionally unpredictable ways. I’m undoubtedly as guilty of it as anyone.

Also, fuck those anti-vaxxers.


> We’re all human and passions drive us in weird, occasionally unpredictable ways.

Very true. It's just our nature, and we're all fallible beings in some way or another. Lord knows I'm at the top of the fallibility list.

> I’m undoubtedly as guilty of it as anyone.

I am as well; more than most, in fact.

> Also, fuck those anti-vaxxers.

Agreed!

I didn't realize that was their opinion at the time, so I shared a bunch of medical literature in the hopes of supporting my argument. What I didn't realize was that any supporting evidence would be dismissed as clearly "paid for" by, uh, $LARGE_FACELESS_ENTITY.

While I'm sure that's true in some cases, the mRNA COVID vaccines are fascinating because the principle has been around since 1989. I'm hoping it works, not specifically because of SARS-CoV-2, but because it shows some possibility as an anti-cancer vaccine. We just need to prove the delivery mechanism is functional.

I really want it to work so that we can manage or eliminate other diseases as well. I think that's why I was so frustrated/disappointed.


> I just don't like the almost-religious overhyping of everything Apple. They're doing a great job. They're not perfect.

That may be so, but again that doesn't really apply to the M1's performance as hardware. If the article was about Apple as an investment, or Apple as the designer of the next computer for me, I'd agree that taking into account their software would make sense. But the whole point of the article is that equating Apple's being Apple with the M1 being a badly designed chip is wrong.


The M1 is really great, but I don't like presentation in the style of "n% faster than the top-selling laptop!" (What model? What workload? What situation?). Please just show benchmark numbers and models.


Completely agree with regards the closed platform.

I switched to Linux for my development machine a few years ago. And using anything other than i3 (or any other tiling window manager) feels like a step down. I like instant switching between workspaces, splits, no animations.

And I'll be looking for a good laptop next year, when everything returns back to normal, and I might have to travel. And will put Linux on it.

And agree with regards to aggressive marketing, in my case it makes the brand a bit toxic for my taste.


I live a double life: Apple for laptops and Linux for desktops (and I'm mostly a Desktop user who manages a lot of Linux servers).

Both platforms have their pluses and minuses. I like to carry ideas between them in my development work and environments. I've found this extremely fruitful over the years.

We ordered a bunch of M1 airs for portability and power efficiency. Will see how it goes.

I personally love both Linux and Apple for laptops but, for the out of the box experience and reliability, Apple is more polished due to integration. My EliteBook has no problems with Linux so far but, when a flawlessly running driver degrades after an update (cough, intel e1000e, cough), it leaves a bad taste in the mouth. OTOH, I'm firstly a Linux guy who prefers and writes GPL software.

Of course YMMV.


Yep. Macs look like a comfortable Unixy environment: preinstalled zsh, a fast terminal emulator (iTerm), the usual shell utilities. I'm sure I could make it work as a dev machine.

I have to admit the M1 MacBook Air's battery life is compelling. A long time ago I imagined a perfect laptop that would be very light and have 20+ hours of battery life. This comes close - even though I've never used a laptop for more than 5 hours on battery, and that only happens a couple of times a year.

Hope you'll enjoy your m1 air.


Thank you. I’m really excited about it. Using a different CPU architecture is always exciting. :)

Using Mac is like driving an Audi S8. Comfortable, smooth, powerful. However, it's not your hand tuned Impreza which can read your mind while going above 200Km/h.

Even an Intel MacBook pro can go a long time on battery (I have a personal Mid2014, configured all-out). I've developed some system-abusing scientific code on it and, it really delivers.

The trick for me is that I can make a Mac a dev machine without modifying it with Homebrew et al. I install a Linux VM in VMware, give it a static IP, and install all required tools, servers and services on it. On the macOS side, I use Eclipse as my IDE and clang/llvm for compiling (since I also aim for code that behaves the same under both gcc and llvm).

I develop on macOS and interface with the Linux VM over the network when absolutely necessary. The Linux VM also hosts heavy tooling like LaTeX, which cannot be installed/updated on macOS very cleanly (I know about MacTeX; it doesn't play nice with newer macOS due to Apple's locks on the OS).

Then everything becomes transparent for me: pull -> develop -> push in either direction. Eclipse already syncs itself via Oomph and the cloud. The code is portable, the environment is the same. It also ensures code compatibility, lets me see compiler effects, and lets me run tests on many platforms.

As mentioned, it also inspires me to write better software. My code carries Apple's sensible defaults and it-just-works mentality along with Linux's flexibility options. This approach allowed me to create a series of utilities for a project in limbo. These small no-setup utilities saved a lot of labor and ultimately saved the project. So yes, Apple is a walled garden and it's not optimal but, they do a lot of things right. We can selectively carry them to more open platforms to make those better. Similarly, open platforms' flexibility can be carried to some macOS applications so the environment better accommodates power users out of the box.

Homebrew and other tools are nice but, I don't like to shoehorn stuff which doesn't fit natively into an ecosystem.


> I know macTeX. It doesn't play nice with newer macOS, due to Apple's locks on the OS

I’ve been using LaTeX from /opt/local with zero integration issue, even on Big Sur. What are the issues you’re alluding to?

Otherwise, I agree with your take in general, except that I do everything you do in a Linux VM directly in macOS. I develop and run my codes on a Linux workstation and an iMac, and going back and forth improves the codes a lot. Performance and usability issues are spotted much earlier.

I much prefer MacPorts' approach to Homebrew's. It is nicely self-contained in /opt/local, does not break when a system library is updated, and the file system layout is much saner. Just update your environment in zshrc and you get all the benefits whilst not changing anything for all the stuff that you don't run in a terminal.


In the past, the MacTeX team had a problem with a particular OS release (when SIP was introduced and enabled), and I was in the middle of my Ph.D. At that time, I had no time to wait for problems to resolve. I got VMware, installed Linux, tuned my LaTeX environment and never looked back.

I'm sure all the problems are resolved by now but, my workflow is mature and everything is working flawlessly. Considering I'm going to need Linux anyway, I made no effort to move my LaTeX workflow back to macOS.

Since the code I'm developing is going to be used in a lot of places, I'd rather develop it in two distinct environments and run tests on each. Also, I like to experiment with different development tools in different environments. Experimenting with and experiencing each environment broadens my horizons. It's also more enjoyable IMHO.

Didn't play with Macports TBH. I don't think I'm going to use it but, will take a look to it.

Another thing is, I don't customize my terminals much. When you manage 1000+ servers with a team, customizing each terminal to your liking is not feasible, so I can work pretty fast with stock bash or anything. I'm old school and don't like flashy console setups anyway. :D


> In the past, MacTeX team had a problem with a particular OS release (when SIP was released and enabled) and, I was in the middle of my Ph.D. At that time, had no time to wait for problems to resolve. I got VMWare, installed Linux, tuned my LaTeX environment and never looked back.

I feel for you; the middle of writing up a PhD is about the worst time to have this type of technical issues. I remember being afraid of any update back then.

> Didn't play with Macports TBH. I don't think I'm going to use it but, will take a look to it.

To me, Macports is the closest to a sane package manager like on FreeBSD or most linuxes. There is practically no learning curve if you’ve already used one. But yeah, it’s not flashy or cool like Homebrew and it wants to use the system software as rarely as possible so it will reinstall zlib etc. I think the trade off is acceptable because then everything is more stable and predictable as the exact libraries used are known and tested, and won’t be broken by an update.

> Another thing is, I don't customize/change my terminals much. When you manage 1000+ servers with a team, customizing each terminal to your liking is not feasible so, I can work pretty fast with stock bash or anything. I'm old school and don't like flashy console setups anyway. :D

Yeah. In terms of looks, I have simple settings to see at a glance whether I am on my local computer, a workstation over the LAN, or a cluster somewhere else (I like clean terminals). I have one git repo with settings files and zsh modules though. So I spent quite some time fine-tuning everything, and all the computers I use behave the same, whether they run macOS or any Linux distro (or even Cygwin, actually).


> when a flawlessly running driver degrades after an update (cough, intel e1000e, cough), it leaves a bad taste in the mouth.

Hah! I see I'm not alone. For me it was the iwlwifi drivers. In one kernel revision, they worked fine; in another, they'd randomly panic and bring down the interface. Downgrading the firmware packages didn't do anything. Kernel upgrades have worked as of late.

Though, it may be hardware-related in my case. I've had nothing but trouble with that particular card. I was going to replace it, but it seems that Lenovo really likes Loctite on their M.2 screws and it won't budge. Tempted to use a soldering iron to melt the thread locker...


Intel went through a phase where they managed to break their graphics and network drivers at the same time.

I wondered whether my hardware was failing, but when they broke three different NICs with the same update, I understood what had happened.

Hope they don't do it again. I'm not going to use their processors in my next build but, I tend to like their NICs.


> I wondered whether my hardware was failing, but

You're now making me question whether this is a problem with the wifi card or the driver. I admit I'm still convinced it's probably the card (I have an earlier revision in a ThinkPad that works fine), but now I'm not so sure!

Curious: Are you talking about that "little" issue where the i915 module was randomly causing a panic? I ran into someone who had some really perplexing issues that a kernel update later resolved, and their dmesg was strongly hinting (to me) that it was linked to the i915 drivers. That, and a bug report I managed to dig up.

And I also agree. I'm looking at a Ryzen build, too. I have a couple of Intel NICs in my file server though and they're better quality IMO than the Realtek cards.


Actually, it can be anything. My mother owns an HP Spectre X2 convertible with an Intel WiFi card. Only the driver bundled with Windows works. Any newer version via Windows Update or Intel breaks the card after a standby-wake cycle. Both updated drivers support the card on paper but, either the driver has a problem or the way the hardware is designed on that particular computer messes something up. A BIOS upgrade didn't fix anything either.

> Are you talking about that "little" issue where the i915 module was randomly causing a panic?

No, I started to see lines on the display like a faulty GPU draws. Sometimes a line, sometimes small corruptions were visible but, it always fixed itself after a minute or two. The laptop in question is extremely low power, so nothing gets hot.

Newer kernel updates fixed these issues too.

Intel's mid-range and higher-end NICs have proper offloading and processors, so they neither tax the system much nor fail to reach their advertised speeds. They're real cards for real loads, so they're more reliable AFAICS. A Realtek card works reasonably well for most light loads but, under sustained high load, it cannot saturate the interface as it should.


> Any newer version via Windows Update or Intel breaks the card after a standby-wake cycle.

hahaha that's insane.

I've seen some weird things come out of HP machines in the past, so somehow none of the above would surprise me.

> No, I started to see some lines in the display like a faulty GPU draws.

Ah, interesting. There was a driver bug in the i915 module that could potentially cause a panic on certain hardware. I never experienced it myself, but I ran into someone who had a similar issue. I don't think their problem was tied to the i915, but it was also resolved by a kernel update.

> A Realtek card works reasonably well for most of the light loads but

Yeah, and the Realtek chipsets are much more cheaply designed. Although, what puzzles me is that most Intel chipsets aren't that much more expensive (depends on the vendor, of course, but you can easily find off-brand dual-port cards for ~$40 USD with an Intel chip). It really makes it pointless to invest in Realtek cards that are cheap but may or may not work at all.

I remember having an issue with the onboard one in my desktop where it would randomly disconnect after negotiating 1Gbps. If I forced it into 100Mbps it would be fine. But, as with the earlier discussion(s), that was also resolved with a kernel update. I used an Intel card for a few years because of that reason, so I have no idea when it was eventually fixed.


> hahaha that's insane.

Moreover, out of the box driver is a stock intel wifi driver. It's not vendor-specific. Phew.

> I remember having an issue with the onboard one in my desktop where it would randomly disconnect after negotiating 1Gbps.

That's how the new driver broke one of the NICs. The other ones just failed to detect a carrier at all. The bug you're referring to is [0], to which I also added some feedback.

[0]: https://bugzilla.kernel.org/show_bug.cgi?id=205047

Realtek's cards are not bad. They're not slow (in terms of latency). They're just not suited to sustained high loads (~60% of interface capacity).


> The bug you're referring to is [0], to which I also added some feedback.

Possibly not, unless I'm misunderstanding. The bug was seen in a Realtek card in 2011-ish.

I suspect that might've colored my opinion of Realtek early on, and it's why I still buy Intel cards to this day. Well, depending on the motherboard, of course.


Oh, then Intel actually caught up with Realtek and broke their cards the same way for some time then. :)

The first revisions of Realtek's 8139 could not perform well in most cases (~30 Mbit max speed, unreliable). The 8169 is much, much better. Their new wireless cards also work very well if the driver is good, but they have so many sub-models with very different feature sets that you need to get the exact chip for your needs.

I use an 8111 (PCIe, Gigabit) at the office as a desktop-to-laptop bridge and it works fairly well; no stability or speed problems so far.


> Oh, then Intel actually caught up with Realtek and broke their cards the same way for some time then. :)

Hadn't thought about it that way, but that's hilarious. Realtek's such a trend-setter.

> 8169 is much much better.

Just checked, and apparently that was the card in the machine I had issues with. I think part of the problem is that it was new at the time. So, I can't really fault the drivers per se. I never had any issues with it after they were fixed.


I have a Macbook and used to have a Hackintosh for my desktop. But this ARM move made me move away from that solution on the desktop.

I tried using Windows with WSL2 first, but there were just too many workarounds I had to do for my workflow.

I then looked into the most popular Linux distributions and chose Pop!_OS. Suffice it to say, I am really satisfied with it. It's probably the Linux distribution closest to macOS that I have used.

I am interested in the M1 Macs but my 16" MBP is still fine. I will look into the ARM Macs next year to see what they offer.


The last I heard, Linux is not coming to M1 unless Apple opens up on technical specifications.



The question is how far he can get without Apple's documentation. On the CPU side, probably pretty far, as they use the standard ARM ABI. On the GPU side? That's where things will get really hairy. Same with the machine-learning cores.


I'm not planning to get a MacBook. My main contender would be an XPS with a new Ryzen CPU on 5nm, if they have one next year, or again a ThinkPad with a 5nm Ryzen. I would not accept anything less.

But thinking about it, it might be that the mobile AMD CPUs are a year behind the desktop ones. So next year the best mobile Ryzen could still be Zen3 on 7nm. In that case Apple will have a two-year transistor lead. But we'll see.


Is there any statement Dell will build Ryzen XPSs?


Only wishful thinking from my side. As mentioned in the post, I hope that the M1's performance and aggressive Apple marketing will push premium PC laptop manufacturers into adopting the best-performing x86 CPUs. And I also hope that AMD will seize the opportunity to grab a good chunk of the mobile market.

Of course, next year's laptop models are probably being finalized right now, and it could be that Intel has still found a way to bribe premium manufacturers into keeping AMD out of their premium models.

In my opinion they'll lose market share if that happens. But there are most likely smart people with a lot more information than me making these decisions.


Linux should be on it shortly, running in a VM environment.

That won't satisfy the purists, but it should satisfy the practical in most cases.


    Yeah John Gruber has really become a shill 
    over the last few years. 

    [...] 

    Yes M1 is great. But it's a closed system.
You can like Gruber or not, and that is your choice. I agree with him more often than not, but I certainly don't always agree.

However, while the "closed system" thing is a valid criticism of Apple, I strongly disagree that its omission from Gruber's article is a valid criticism of his writing.

Gruber writes for a specific and informed audience.

He is not writing articles for people who've never heard of Apple, or people who may not be aware that Apple runs a fairly closed ecosystem. He assumes that you know all of that.

Whatever you think about his writing as it exists today, imagine if he had to preface every single blog post with background information like that? Should the first 1,000 words of every post include a brief history of Apple and a discussion of open vs. closed ecosystems, and where Apple falls on that spectrum?

He knows that stuff. He knows that we know that stuff.

What's the last article (or speech, or email, whatever) you wrote? Surely, when deciding what to write and what not to write, you considered your audience and their level of context and domain knowledge, as well as the constraints of your medium?


I've read pretty much all of Gruber's blog posts for years. I'm very much an Apple person myself. I wouldn't call him a shill, but I do have to agree that he's become lazy about some of his opinions. He often bashes Facebook (which certainly deserves criticism) without ever using it. Many of his Apple reviews are very long-winded love letters to Apple about how great their products are. Every once in a while he will call out blatant Apple mistakes, but they have to be pretty bad.


>John Gruber has really become a shill over the last few years

Only a grain of truth to that. He has always been known, as far as I am aware, merely as an Apple fan who blogs about it. There may be some "shilling", to the extent that he probably has Apple stock, but it is also done out of conviction.


> Apple performs many tricks to get such long battery life

I always assumed this is why Apple doesn't allow actual third-party browsers on iOS.


> The author is bashing another author for not doing measurements

That is a poor characterization of this article. Gruber's critique of Moorhead's article doesn't reference a lack of benchmarks or measurements at all.


Gruber has also cited plenty of benchmarks in other posts. So he didn’t on this one. So what?


>The author is bashing another author for not doing measurements, yet himself declares that m1 beats all the other CPUs, without providing any measurements.

That's because there are plenty of measurements published, which agree with the author, and disagree with the "another author".


The published benchmarks are mostly "meh" when it comes to anything at the prosumer level and above.

The insanely good thermals and 5nm process make the M1 a great laptop chip, but middling performance and a lack of expansion only make it a decent mini-desktop for family use.

If the rumors of the M1x prove true, that might be the first real pro-level ARM CPU.

For pro-level performance, it's roughly matching AMD's three-year-old entry-level Ryzen 7 1700:

https://openbenchmarking.org/vs/Processor/Apple%20M1,AMD%20R...


>The published benchmarks are mostly "meh" when it comes to anything at the prosumer level and above.

Which makes sense, since those are the lower-specced Mac machines (Mini, Air, 13" Pro). At the level at which they compete (the similarly specced Intel Macs sold until recently, Surface machines, PC laptops at the same price) they obliterate them.

"Meh at the prosumer level" for a consumer-level machine is kinda like saying that a 64-core AMD Threadripper is meh at the mainframe level.


If the 13" Macbook Pro is not for Professional work, then I guess that makes sense.


The Pro in MacBookPro is a brand name, not a product category.

It means "it's intended/specced/priced for above the casual home user / casual student".

It doesn't mean "it's for cpu/gpu hungry IT/video/rendering/data crunching/etc work".

A banker, a VC, an accountant, a small shop owner, a graphic designer, a web UI developer, a dentist, and so on and so forth, are also professionals.


It makes sense. Just as the iPhone 12 Pro is not for "professional" photography.


Depends on the profession.


A 10W SoC roughly matching a 65W CPU... That's bonkers.

Those benchmarks also aren't differentiating between the fanless MacBook Air, the actively cooled MacBook Pro, or the wall-powered Mac Mini, all of which have different performance characteristics with the same chip.


What I wrote: "The insanely good thermals and 5nm process make the M1 a great laptop chip"


Sometimes Gruber calls people out for their bad commentary and it comes across as being superior. This time, though, this article was very much needed.

Moorhead’s article got a lot of traction, and ‘felt’ serious enough that a lot of folks would be dissuaded from the M1 Macs.


> With Zen3 on mobile, giving a similar 20% boost as it did on desktop, plus a jump from 7nm to to 5m, giving another 20%, next year might be a very good year for Ryzen laptops.

You're making the classic mistake of comparing what might be coming at some undetermined time in the future to what you can walk into the store and buy right now.


You can walk into a store and buy a Zen 3 processor right now (if you're lucky; they're in high demand), fiddle with the clocks and voltages to simulate (poorly) a laptop chip, and notice that, right now, at the same power, a Zen3 core at 7nm HPP is comparable in single-core performance to a single M1 big core on 5nm LPP.

So right now we know that Zen 3 is more or less equivalent or even ahead in architectural power efficiency. You do have to wait for 5nm LPP to see it pull ahead or use it in a laptop, yes. But the statement that M1 is well ahead of the competition is clearly false. It's competitive.

As for multicore performance, it's already matched by Zen 2 parts.


So why can't we walk in and buy a Zen 3 in a laptop right now?

.... because the thermals and engineering have not been designed/completed yet. Sometime in the future they will be, which is great. But that's the future. I'm still talking about right now. I like to buy things that actually exist and work today, not things that hypothetically work well in the future.


There's a difference between Zen3 desktop chips and Zen3 laptop chips. The former has two pieces of silicon and the latter has one. They just aren't shipping yet; apparently some announcements are due at CES in January, and it's unclear whether there will be volume available then.

Much of AMD's current fab capacity is going for PS5 and Xbox X/S CPUs and of course the rather popular zen 3 desktops.


Well, no. The reason you can't buy one right now is because there is no 5nm TSMC capacity left to build them.


But you said I can get a 7nm one that is pretty close to the same thing. Why don't they sell me one of those in a laptop ready to go?


Because it would be a monumental waste of money to develop a 7nm version of the laptop chips for what, 3 months?


But surely the 7nm part has been around long enough, they should have been designing and validating it months before the part even existed...


Well, no. The 7nm part is specifically meant for desktop computers. The part meant for laptop computers, that has graphics, uses a different memory controller, has different I/O characteristics, and uses very different cache structures, was always meant for 5nm only.

They both, however, share almost the exact same CPU logic, which is why we know that performance characteristics will be essentially identical.

You could ship a laptop with the 7nm chip, and it was done with Zen 1, but then you will have very high RAM power usage, you will need dedicated graphics with non-unified memory, you will miss out on some accelerators, and you will have some I/O issues.

Because of all these problems, no one thinks it's worth it to design a laptop against the 7nm parts, and it's not possible to build a 7nm version of the laptop part without a very large investment.


Regarding hardware, will Dell be able to compete with a MBA at the $1000 price point? I doubt it.


If it will cut into their sales they might have to adapt.

Also, from what I heard, while Intel was on top it was charging outrageous margins on its CPUs; AMD mobile chips were said to be hundreds of dollars less per CPU. The manufacturers might choose to pass on these savings.

Of course, if AMD becomes the only game in town, they might jack up their prices as well. We'll see.


I didn't really think about it at the time, but Nvidia buying ARM makes even more sense now that Apple has shown what can be done with ARM CPUs. Not necessarily because Nvidia wanted a CPU, but because otherwise, they'd be stuck providing GPUs for Intel CPU systems. AMD can bundle Radeon GPUs and Apple has the M1 CPU/GPU. That means that for a system vendor (Dell), the only cost effective place to use an Nvidia GPU would be in an Intel system. And if those are already more expensive, then that market will be shrinking.

However, with Apple showing the viability of ARM for desktop / laptop use, this opens a whole new market up for Nvidia/ARM. Apple has already shown it is a viable approach hardware wise, so half of the work is done.

So, an ARM powered Dell laptop might not be too far off.


You're right. It could be that Nvidia will push for more beefed-up Cortex CPUs, with the same tricks employed in the M1: larger L1/L2 caches, a bigger die, more complex pipelines, RAM on the SoC, and an Nvidia GPU.

The potential problems are:

- This might take some time, if it's not already in progress. And PC manufacturers might not be able to afford lower sales for a few years while this happens.

- Windows seems to still have issues with app portability and emulation. But who knows, maybe MS will push harder.

- Making their own chips while also licensing cores to vendors like Qualcomm might spook those vendors. It's not a good position to compete with someone who owns the IP you depend on. This might be a good thing for RISC-V.

But we'll find out in a year or two what the play is. Competition is good for everyone. That's why I don't like Apple having exclusive access to new nodes, whether because Apple is willing to pay a premium or because other companies avoid low yields.

Anyway, seems like TSMC will have their hands full for a few cycles.


> And PC manufacturers might not be able to afford lower sales for a few years while this happens.

What do you mean? What other options do they have?


The other option is to use the best available x86 mobile CPUs, and these at the moment seem to be from AMD.

From what I've seen, even current AMD CPUs are competitive with the M1. The Ryzen 4900HS has a fair lead in multicore performance and is close in single-threaded. Of course, all this at higher power, with the fans running like a jet engine.

But AMD's mobile parts are one process node behind Apple, and one architecture behind AMD's own desktops (Zen 2 on 7nm).

So OEMs can use the gains from both advancements to lower the TDP a bit (e.g. 10%) and fit a larger battery. That can result in a competitive product, even against the rumored M1X, since Ryzens already have the multicore lead.

That being said, from what I read, the next mobile Ryzens will be the 5000 series, released in a month, still at 7nm. So only a ~20% IPC uplift is expected. Also, TSMC's 7nm has probably improved a bit. Nothing to scoff at, but not as exciting as a new process node.

Anyway, we'll see soon enough.


Nvidia isn't stuck on Intel, they're selling AMD CPUs as part of their DGX A-100.

That said, it's clear that they're hoping for more vertical integration in the same way that AMD already has.


So in short, CPU and GPU manufacturers will have to do vertical integration themselves to compete.


> With regards to the M1, I'm grateful for a good competition. As this will likely push premium laptop manufactures (Razer, Dell) towards ryzen CPUs, as opposed to using them in cheaper models. Loosening the grip that intel has on them.

In the past this has not been the case, due to Intel's anticompetitive practices. While I think it's good to be cautiously optimistic, I would also not be surprised if no-compromise flagship notebooks with Ryzen in them never materialize from the larger PC manufacturers.


This is a unique situation though. Intel is now years behind in their process nodes. Ryzen right now is already significantly better than any Intel CPU. M1 has been released which trounces all of those and PC manufacturers now have to compete with Apple in a way that they haven't really before. I don't remember Apple ever being this competitive in performance and they're going to take the lead once the Macbook Pros are released. The only way for manufacturers to keep up is a top-to-bottom switch to Ryzen.

So I do think we're going to see a big shift towards Ryzen, since I'm not sure how Intel could possibly entice manufacturers to stick with them when they haven't been able to deliver in years.


> As this will likely push premium laptop manufactures (Razer, Dell) towards ryzen CPUs, as opposed to using them in cheaper models.

For that to happen, AMD needs to be able to ship high volumes, and keep that promise. Right now, they can’t deliver, most of the volume booked at TSMC goes towards the console SoCs.


What are the prospects for a hypothetical future fanless Ryzen laptop?


Intel's 10nm is approximately equal to TSMC's 7nm (it even has advantages in some areas). The A-series chips at 7nm (the A13) were already significantly ahead of Intel's offerings in efficiency and instructions per cycle. Intel went all-in on a design that relegates performance to power-gobbling turbo boost, and here we are.

Just generally too much is made about node size. It isn't a magic pixie dust that you can sprinkle on a chip.

I certainly don't want to engage in chip advocacy wars, but it has been interesting seeing the goalposts move. Last year it was just that Apple had some easily replicated, lazy big cache as the key to its performance; that's if you accepted that it was actually high performance at all, because Something Something Geekbench is a big lie and they're actually miserable performers. Now it's just node size. Super simple, trivial things that somehow discount what they're delivering.


> Last year it was just that Apple had some easily replicated

It feels like if you listen to the peanut gallery, Apple CPU advantage is always easy to replicate. They said the same thing when Apple bumped the A series to 64 bits and at pretty much every step between.

If Apple's formula was easy to copy or as obvious as everyone seems to claim, then we wouldn't be in the position where Apple has the best mobile CPU, the best wearable CPU, the best tablet CPU, and the best ultrabook CPU.


Apple always gets people riled up on here, it's one of the few areas on HN that discussions just get rather poor quickly.

> If Apple's formula was easy to copy or as obvious as everyone seems to claim, then we wouldn't be in the position where Apple has the best mobile CPU, the best wearable CPU, the best tablet CPU, and the best ultrabook CPU.

Exactly. It isn't easy to do. Apple is in a unique position to push their platform to the absolute limit because of their vertical integration. And while that sacrifices openness, most people buying laptops just don't care.


> And while that sacrifices openness, most people buying laptops just don't care.

This is it in a nutshell. I used Linux as my web development platform because I could immerse myself in Unix on both the server and the client (something you still can't do well in Windows, FWIW). When OS X moved to Intel, suddenly I could live and breathe something very close to my server-side Linux setup on my desktop, and have a fuss-free OS. Open is nice, but performance and just being able to get my job done effectively is much more important.


Multiple rounds of 20% improvement will still not give you the performance jump brought by M1. It takes tasks that were simply unthinkable on a light laptop and not only makes them possible but manages to beat something like a 10-core i9 [1], in a machine without a fan.

[1] https://arstechnica.com/gadgets/2020/11/mac-mini-and-apple-s...
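A back-of-the-envelope sketch of why compounding helps but isn't magic (the 20% figures are the generational gains quoted upthread for Zen3 IPC and the 7nm-to-5nm node jump; this is illustrative arithmetic, not a benchmark):

```python
def compounded(gains):
    """Multiply out a list of fractional per-generation gains."""
    total = 1.0
    for g in gains:
        total *= 1.0 + g
    return total

# Two successive 20% improvements compound to only ~44% overall,
# well short of the 2x-class jumps seen in some M1 workloads.
print(compounded([0.20, 0.20]))  # ~1.44
```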


Multiple rounds of 20% improvement like what we've been getting with iPhones each year?


How quickly we forget... the 10-core i9 was being embarrassed by AMD's Zen 2 mobile processors this spring.

https://www.notebookcheck.net/10-core-Intel-Core-i9-10900F-d...

Looks like single core Zen 3 scores about 2100 on Geekbench 5. M1 scores about 1700.

https://browser.geekbench.com/v5/cpu/singlecore https://browser.geekbench.com/v5/cpu/search?utf8=%E2%9C%93&q...

I'm confused why we feel like AMD Zen 3 has to "catch up" when, by those numbers, it's already ~24% ahead.

Yes I get it, per watt, battery life. The M1 is awesome for those metrics. But AMD chips have more "absolute" power. So you pick the one that is better for your needs - very powerful portability, or simply ultimate total performance (or something in between).
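For what it's worth, here's the percentage arithmetic using the single-core scores quoted above (illustrative only; the 2100 and 1700 figures are the ones cited in this comment, not fresh measurements):

```python
# Single-core Geekbench 5 scores as quoted above (not re-measured here).
zen3_score = 2100
m1_score = 1700

# Relative lead of the higher score over the lower one.
lead = (zen3_score - m1_score) / m1_score
print(f"Zen 3 lead over M1: {lead:.1%}")  # Zen 3 lead over M1: 23.5%
```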


Those 2000+ scores are from hobbyist enthusiasts overclocking the Zen 3 CPUs, with elaborate liquid nitrogen cooling setups at the highest scores.

Stock Zen3 single core scores average around 1650: https://browser.geekbench.com/processor-benchmarks


There also appear to be an awful lot of them running macOS, which doesn't really make sense (overclockers with bleeding-edge Ryzen systems don't have a lot of overlap with the Hackintosh crowd). Seems like it might be at least partially a data error.


What I'm trying to say is you'll not see a laptop that resembles a MacBook - silent, slim, light, with day-long battery life and top performance - with Zen CPUs anytime soon. It's simply outside the performance envelope. 35W vs 10W is a huge difference, even before you account for the slow-core/fast-core architecture vs frequency boosting.


https://www.anandtech.com/show/16252/mac-mini-apple-m1-teste...

> We noted that although Apple doesn’t really publish any TDP figure, we estimate that the M1 here in the Mac mini behaves like a 20-24W TDP chip

It also looks like the 15W AMD Ryzen 7 4800U trades blows with the 20W Apple Silicon M1.

On the other hand, I agree - OEMs are just barely starting to warm up to AMD (so "Macbook-like" laptops are rare, the Huawei Matebook is a blatant copy, but it's not the original) and the M1 is more power efficient, enabling amazing battery life in laptops. I didn't contest that idea.

I just think there's a classic reality distortion field happening when comparing the Apple Silicon M1 to "all other CPUs" that, for many, veers rather far from the truth.


Patrick Moorhead is a stock analyst for the chip industry. In Wall Street terms, he is aligned with Intel and others who are shaking in their boots right now. He's not going to say the M1 is great until Intel/NVidia/etc. allow him to.

These pundits know which side of their bread is buttered. He knows that his clients were caught off-guard by the M1 announcement. He needs to give his clients enough time to sell their Intel stock before it becomes conventional wisdom that the legacy chip makers have a business model that doesn't work.

The history of the tech industry is dotted with pundits naysaying anything new from companies they aren't in bed with. Eventually they can't deny reality any longer. Through the magic of the media's short-term memory, they change their tune and deny they've ever said anything else.

I remember in the 1980s when the top pundit of the industry was John C. Dvorak. Everyone read his column. EVERYONE. The Amiga was a potential threat to John, who was aligned with the MS-DOS world. He wrote many columns about how multitasking was stupid because "your desk can't fit more than one keyboard and mouse". Yeah, that was his reason. Of course, once MS-Windows arrived, suddenly his column was about how multitasking was this new thing, the best thing, the thing everyone should have. If you don't have it, you're an ignorant loser. I remember telling my friends that I wished I had saved his old anti-Amiga/anti-multitasking columns, because I wanted to show up at one of his public appearances and ask how MS-Windows lets you fit so many keyboards on your desk.


Was it really that shocking for Apple to announce they were shipping their own silicon? That struck me as a pretty open (non-)secret, and I'm not even in that industry.

I doubt Intel was actually caught unaware.


It wasn't a surprise that they were doing it. It absolutely was a surprise for Intel that it ended up being this good.


But how can that be a surprise to anyone who has been following the development of the iPhone/iPad CPUs? The M1 is exactly what you would expect it to be by extrapolating from the performance of recent iOS chips.


I certainly wasn't expecting an M1 Mac mini to compile our iOS app codebase in half the time compared to a super beefy 2018 MacBook Pro.


This is more about trying to soften the blow on Wall Street. I'm sure Intel had a good idea Apple's chip was going to blow the doors off their offerings.


But I don't really see how this will affect Intel financially in any meaningful way in the short term.

They won't provide chips for Apple anymore, but that must have been a relatively small number of units, so no great financial loss to them.

And Apple are never going to sell these chips generally, so there's no competition there.

In the long term though it may actually benefit Intel (and AMD/NVidia) who might work out exactly what Apple have done and possibly replicate it in their own CPUs.


That assumes Apple won't be growing its PC market share.

Given the M1's benefits, the ability of Big Sur to run iOS apps, and Apple's marketing prowess, I'd be surprised if they don't double their market share in the next 3 to 4 years.

Then there's the (highly lucrative) server market.


> Then there's the (highly lucrative) server market.

Apple has flirted with the server market on several occasions. I could see a well priced Apple server with an M series CPU doing quite well if Apple did it right. That said, Apple has never really shined here and it's not really their strong point.

I'm sure MacStadium would love if Apple launched a blade server platform where you could just slot Mac mini logic boards in.


I could see Apple potentially licensing the design (minus IP critical to their macOS workstation offering, like the GPU or neural engine) to a big cloud provider. If the cloud provider does the heavy lifting on the Linux port (something Apple isn't publicly talking about), you could see this become a big deal overnight.


While Apple's volumes are low, their parts are mostly the higher-end chips. I think just about every i9 laptop mention I've ever seen has been in a 16" MBP. So while Apple is 10-15% of Intel's volume, I've heard numbers as high as 40% of the profit.


I have no idea how much of Intel's profit can be attributed to Apple, but Apple definitely buys more higher end laptop CPUs from Intel than anyone else. That's going to hurt their bottom line.

I also expect Apple will be picking up some market share. How much remains to be seen. If price/ performance was your primary reason for avoiding Apple, it's gotten a lot harder to resist switching.


I doubt much of the market cares particularly about price/perf. But the $700 machine does seem like an impressive value, able to do a wide range of things not normally considered at this price point, like editing 4K video.


You basically described price for performance: it can do more at a lower price.


I agree in general, but is this really true?

> the legacy chip makers have a business model that doesn't work

It seems like this implicitly assumes that everyone switching to Apple silicon is a possibility. But it doesn't seem likely that all the android users are going to switch to iphones or that all the windows/linux users are going to switch to macs.

This is hugely embarrassing for Intel and Qualcomm in particular, and stocks will fall. But the business model outside the walled garden isn't necessarily going to change.


> It seems like this implicitly assumes that everyone switching to Apple silicon is a possibility.

Apple has turned the tables. They went from being way behind the pack in terms of price/performance in the 90s and 00s to the point where Apple's base MacBook Air is now competitive in performance with laptops that cost twice as much.

I suspect they will absolutely pick up share. I also expect AMD will gain some share here as OEMs look to compete with Apple's offerings and Intel has come up short.

Also, by losing Apple, Intel is also losing one of their most profitable customers. While Apple isn't Intel's biggest customer in terms of units shipped, Apple tended to buy Intel's higher end CPUs which other OEMs avoided.

Everyone isn't switching to Apple, but the competitive scene is changing a lot and it doesn't favor Intel.


Also people are seriously underestimating the impact of being able to run iPhone/iPad apps.

My partner is now able to run dozens of Korean apps which never have had a Mac equivalent.


If one is living on US or comparable countries, maybe.

Over here Apple has to do much more to win market share over PC sales.


I'm not sure where "Over Here" might be, but I know in some places Macs are ridiculously expensive due to tariffs and whatnot. It's hard to account for regional differences.

Though I expect they will pick up share everywhere, just might go from 0.5% to 0.9%. I suspect everyone will benefit from the secondary effects.


It proves the viability of ARM for desktops, laptops, and probably eventually servers too. Intel having been humiliated by this will loosen their grip on OEMs who will now be under pressure in the inevitable rush to produce appealing, performance-oriented ARM laptops, as the current market has basically been Chromebooks and some half-hearted Windows ARM tablets, all based on chips adapted from phones.

And of course there will also be the rush on the fab side to produce a non-Apple chip that's actually suitable for these machines, which will in turn have performance characteristics far more suited to server use than any previous ARM chip, enabling that market and feeding back a demand for larger and more performant server ARM chips and mainboards— and who knows, maybe even some kind of socket standard so they aren't soldered down like it is today.


ARM is already pretty viable for servers. At least viable enough that most of the major cloud providers have arm64 based instances available.


I think there’s a qualitative difference here between viable (which is where we’re currently at) and utterly dominant (which is where I suspect we’ll be in five or so years).


I agree, but even still how does that affect Intel’s business model? Are they incapable of building ARM chips and selling them the same way as their x86 chips? Seems like the business model is fine but the product will need to change, no?


Sure they can, but at vastly lower profit margins.


Yup, because they'll actually be in competition with all the other ARM licensees, instead of just in a duopoly with AMD.


ARM started on the desktop.


But for the last twenty years, surely pretty much every Linux-capable ARM part produced in volumes above a million units has fallen into one of two buckets: either a) a device that runs on batteries, or b) an embedded, specialized processor, such as for a router, vehicle infotainment, or a home appliance.


That pretty much sums up the actual usefulness of Linux itself.


Not sure. There are clearly some problems with the x86-64 ISA. If the world switches to ARM in the coming decade, that leaves Intel in a perilous position: no advantage in IP and no advantage in fabrication.


Right, the problem for Intel & AMD is not that everybody will switch over to Apple chips (they can't, because Apple likely won't sell it to them), but rather that Apple has proven beyond any reasonable doubt that ARM chips _can_ be this good when scaled up to laptop-level power envelopes.

That will probably prompt other ARM vendors and certain PC manufacturers (Microsoft who already has the ARM based SQ chips, for instance) to further invest in similar products that chip away at Intel/AMD's lead and likely eventually overtake them.


The world may want to switch to ARM but right now there aren't really any options to switch _to_ right? Beyond Apple I can't think of any chip manufacturers offering desktop ARM cpus...


I think it has less to do with everyone switching to Apple silicon and more to do with the clarification of viable business models: you either become an IP company and outsource production (AMD) or you go completely vertical and design and produce custom-designed SoCs to be used exclusively in your consumer products (Apple). The Intel model of designing and producing all-purpose commodity CPUs seems to have hit a wall due to both lack of demand and unanticipated manufacturing problems.


The (apparent) lack of demand for intel chips is because their manufacturing problems have disrupted their roadmaps. Because their 10 nm process is still not really working, and they were working on multiple releases assuming the process would be there when the design was, they're in a bad place.

From public statements, it seems like they've thought 10 nm was going to work soon for the whole time since 2018 that it's been in limited production. And they haven't really done substantial design work on the 14 nm processors (there are some releases, but it's mostly cramming more cores on there rather than architectural changes; AFAIK, Coffee Lake and Comet Lake don't have much in the way of IPC gains vs Kaby Lake). Also, they clearly need a new naming person.

It doesn't help that they killed their Atom-for-phones program when they did, right around the time Microsoft announced Continuum for Windows 10 Mobile, which would have been much cooler if it was running on an x86 chip (and if Microsoft hadn't done a terrible job with Windows 10 Mobile in general).


Atom underperformed too though.

There was an Asus Zenfone that shipped with an Atom, and even in Java apps or the few Android NDK apps that shipped x86 APKs, it wasn't fast.

Fun fact: Android used libhoudini (basically a reverse Rosetta 2) to allow ARM NDK apps to run, though they were pretty rough.


Samsung and Microsoft are the two to watch.

Both have followed Apple's direction and neither is going to want to let the PC market get away from them.


No, it’s not true. Most people buy their computers because of the price, style, and easily observable features (not necessarily performance, certainly not the type of difference that only shows up in benchmarks). Hacker News commenters are far from your average PC buyer. I’m sure the M1 and future desktop Apple silicon will have an impact, but it’s far from a killing stroke. If people only cared about maximum performance, budget machines wouldn’t sell at all. In reality, they sell in much higher volumes than high end devices.


The biggest game changer is the battery life; it's so good that this feature alone will put it ahead of most of the competition.


And not all HN readers are won over either. I for one am not willing to switch to Apple's closed ecosystem for a chip that is a little faster. Or even a lot faster.


Up until two days ago, I was using a 6 year old laptop with a Celeron processor as my main home computer.

I think the whole Apple Silicon thing seems really interesting, but honestly, the only thing I do that is remotely taxing on decade-old hardware is load Facebook.


Eventually someone will build a laptop using chips like this and an acceptable OS.


Same here, I do use Macs at work, but at home is all Windows and a surviving Linux laptop.


Nor I, so I contributed to the Patreon for a Linux port to the M1.


Not getting hot or loud while lasting forever on battery are easily observable selling points, I'd say.


There are plenty of other laptops out there that don't get hot or loud and have all day battery life, though the M1 has multi-day battery life so fair point there.


You know, I would have sworn that Dvorak used to write a Mac column for an American Macuser or Macworld when I was a student in the 1990s, waxing lyrical about the Mac.


He was sometimes on the back page of one of those (I think MacUser), but just as ultimately wrong about everything as ever; online shopping, for example.


He did write for a while in one of those publications, but he hardly waxed lyrical. It was mostly vaguely negative “insights” about Apples business strategy and how awesome they could be if they just listened to him.


Fair enough, it was a long long time ago. I guess he hasn't changed. Pundits never need to be right.


Is Intel really so blind that they didn't see this coming or even have detailed inside info about it?

I would wager that they knew, but execs chose to protect their personal short-term interests over those of the company and its long-term shareholders. This is typical behavior for a modern US publicly traded company.


What would have been the choice they should have made that was not protecting their short term interests and better for the company?


There's been comments here and elsewhere for a while saying that Intel's remuneration policy (i.e. salaries and benefits) have been uncompetitive and between that and the culture some of the best people have left (a number of them to Apple) so the answer's pretty obvious.

Throw money at it - find the best and brightest people and stuff their mouths with gold to have them solve whatever problems Intel's still having with 10nm and beyond as quickly as possible and with honest timelines to when that'll happen (not "oh it'll totally work this quarter").

Intel blowing the lead they had on semiconductor process technology is an existential threat, especially if it's blocking them getting out improvements in IPC (i.e. see what AMD are doing) also, so it's impossible to overstate how important this should be to Intel's senior leadership (by which I mean if it's not solved real soon, the shareholders ought to be removing them from post).


> I would wager that they knew

Same. It's pretty clear that they knew, and those who were blindsided earned their pink slips.


But unfortunately that's not how publicly traded companies operate. The highest paid execs just hang around for a few years, and almost regardless of how the company performs, they exit with a nice package and move into another company with a similar or better agreement.


Maybe you should read Andrew Grove's book. Intel used to be this way and I suspect they just intentionally made the easy choice, not the socially/business good choice.


This is nearly right - I don't think he's a stock analyst but he clearly gets some work from Intel, Dell etc - so hardly an independent reviewer (not sure Gruber would claim to be either fwiw) although he's also written this on Arm over the last few days.

https://www.forbes.com/sites/patrickmoorhead/2020/12/02/30-y...

In any event he could have led with "Why you might want to wait" rather than "Why you might want to pass" given that most of the criticisms are likely to be fixed soon.


Is that the same John C Dvorak who is the older and crotchetier of the "two crotchety old men" podcast who spend half of each show hyping their "Amway meets PBS meets LARPing" monetization scheme? That podcast would actually be listenable if they just ran some ads... Still, he has landed on his feet, even if he no longer claims any particular tech expertise.


Dvorak, IIRC, was an OS/2 fan vs. Windows, so...


I've never owned a Mac and don't intend to own one but I'm very happy with this M1 move, mainly for the reasons highlighted in the introduction to the article:

>M1 Macs embarrass all other PCs — all Intel-based Macs, including automobile-priced Mac Pros, and every single machine running Windows or Linux. Those machines are just standing around in their underwear now because the M1 stole all their pants.

I think it's a bit unfair to lump Linux in there since it's been running on ARM and other embedded, low-power devices basically forever, but it is true that few of us run our Linux desktop on an ARM board.

But I think the overarching point is true: now desktop and laptop makers can't just pretend that having a hot and/or slow, clunky, noisy x86-based architecture is just a fact of life. I hope and expect that it'll help create a new generation of ARM-based laptops and maybe even desktops that will run cool and smooth.

I want to believe.


> I think it's a bit unfair to lump Linux in there since it's been running on ARM and other embedded, low-power devices basically forever, but it is true that few of us run our Linux desktop on an ARM board.

We aren't running ARM on our desktops because there aren't ARM desktops for us to use. The market jumps from low-end boards like the Pi and Pine's lineups - which are excellent for what they are, but suffer from poor performance and I/O - to the high end where ex. https://www.asacomputers.com/Cavium-ThunderX-ARM.html looks like a nice system but $2,820 is a bit of a barrier to entry.


> We aren't running ARM on our desktops because there aren't ARM desktops for us to use.

Indeed; while ARM-based Linux desktops hit the price and energy-efficiency targets on the nose, they aren't going to wow anyone with performance.


Exactly! I would love to switch my systems off of x86, but I can't buy anything performant for a price that I'm willing to spend on hardware. I might yet try a Pi 4 as a desktop with USB for disk, but that's hacky and still suffers from performance problems. Or, I could get something with good performance, either POWER or ARM, but then I'd have to pay through the nose for the privilege. Now, some of this is market size: I get my x86 boxes at a huge discount by buying used, and high-end ARM/POWER machines aren't popular enough to really show up on eBay (yet). But in practice, there's just no good options.


The thing is not just "ARM", it's Apple's implementation. Their A* mobile cpu lineup destroys any competing mobile ARM CPU, and the M1 evolved from that. So just running ARM isn't going to save anyone.


The ThunderX is also, what, 4-5 years old now? There are no ThunderX3 desktops, are there?


It's not just "ARM is fast," though—Apple's ARM chips in particular beat the pants off their ARM competitors. (See benchmarks of any iPhone compared to flagship Android devices.) I don't think you should expect similar levels of performance from a run-of-the-mill ARM chip on the desktop (though I'm happy to be shown benchmarks to the contrary).


Probably because there's basically no competition in the ARM space, especially in non-mobile. I'd like to see what amd and intel can do with ARM before saying apple will continue to be king.


Qualcomm, Samsung, Cavium/Marvell, and nVidia don't count? Not to mention ARM's own cores...


They don't count in the PC space. For desktop PCs there's just the Mac mini and various hacker boards. For laptops there's the new MacBooks and the mediocre Qualcomm 8cx/SQ2 and nothing else.


For other ARM manufacturers, I wonder how much of this is a software problem.

I mean, what would the market be for a super-fast ARM laptop/desktop machine? It would run Windows RT, I guess?

How much better than Intel/AMD would the ARM machine need to be to justify buying it and running that? It seems like the M1's "magic" is not just the chip, but very much also the Rosetta thing.


> For other ARM manufacturers, I wonder how much of this is a software problem.

IMO, one of the key innovations Apple made in the M1 is an optional x86-style total store ordering (TSO) memory mode, which makes it much more efficient to translate x86 code into ARM code.


Windows RT doesn't exist anymore. Now it's Windows on ARM, which is basically fully featured Windows, with the exception that x86 apps have to run in emulation and x86-64 apps don't run at all (yet).


No one said "in the PC space" here.

There's lots of credible ARM work in both the mobile and the server space. Generally, a lot of it would work OK in the PC space, but for the lack of software support. Apple has the ability to leapfrog that particular problem.

Still, Apple seems to have done pretty well compared to both the other mobile and server ARM vendors with their PC offering.


This whole article and thread are about the desktop (not server) space.

I do agree that Apple's push to ARM for laptops will have a dramatic impact on the server space in a year or two. Intel is in for a world of hurt unless they can pull a rabbit out of their hat.


Dunno, seems more like now. Amazon has arm instances, there's a bunch of arm systems in the server space already and the world's fastest HPC cluster is already arm.


There's Exynos.


No, there aren't any PCs running Exynos.


That's true if you consider Chromebooks not to be PC-class machines.


Exynos is dead.


Samsung stopped custom Exynos core development a year ago. Cavium is basically dead (yes, bought out by Marvell). Qualcomm is interested only in mobile SoCs at mobile SoC volumes, where it can also sell radios, and Nvidia has traditionally not been capable of delivering anything at its promised power consumption.



Are you sure about Exynos? They announced the 1080 just a couple of weeks ago and the 2100 is expected next year.


This is from wikipedia (under "Exynos"), so apply the appropriate amount of salt:

On October 1, 2019, rumors emerged that Samsung had laid off their custom CPU teams at SARC.[21][22][23] On November 1, 2019, Samsung filed a WARN letter with the Texas Workforce Commission, notifying of upcoming layoffs of their SARC CPU team and termination of their custom CPU development.[24] SARC and ACL will still continue development of custom SoC, AI, and GPU.[25]


You misunderstood what that means. Samsung is continuing to develop Exynos SoCs using stock Arm CPU cores and AMD GPU cores. This will require fewer people hence the layoff.


Exactly; they don't have their own cores anymore, and they're going to use ARM's Cortex designs instead. So their SoCs will always be average at best, by definition. They won't be able to make anything exceptional.


HiSilicon's Kirin chips use Cortex microarchitecture and are very competitive. I didn't follow recent releases, but I remember that the Kirin 970 was way ahead of the Snapdragon and Exynos chips at the time, both in terms of performance and power efficiency.


Huawei usually releases a new Kirin SoC in October with the Mate series, while Qualcomm releases its flagship Snapdragon 8xx in December, so a Kirin that outperforms the latest Snapdragon is beating a chip released the previous year.


Samsung stopped developing their custom cores. They will still continue making Exynos, but with reference design ARM cores.


Yes, and their SoCs will be as average as the Cortex reference implementation allows. They won't be able to make an M1-like SoC.


And Fujitsu as well; I think they are making the fastest chips at the moment, or at least the fastest systems with ARM.


Apple's ARM SoCs are definitely ahead, but it's not like the competition is hugely far behind. Top of the line Android phones and tablets perform well enough for most desktop tasks IMO. It's mostly the software and I/O that makes them unusable as such.


The numbers say otherwise.


The numbers show that the other chips are slower, but they don't show that they are "unusable".


Put it this way: if the new Macs had the performance of the fastest Qualcomm ARM cores available today, we wouldn’t be having this conversation now.


x86 chips aren’t “unusable” either, and if these other ARM chips wanted to compete in the desktop space they’d be just as - or more - “hot and/or slow, clunky, noisy”. ISA just doesn’t matter half as much as microarchitecture.


And those x86 are only still usable because M1 hasn’t been out long enough for the new software, that needs the speed to be “useable”, to be written.


5nm process would help a lot.


When iPhone came out in 2007, Android was still an OS for BlackBerry like devices. It didn't take long for Android to discover touch screens once Apple made it clear that they were dramatically better. Similarly, it's not like other ARM chip manufacturers are going to have to go through all of the same R&D expense to figure out how to make chips that are comparably fast--they just need to copy what Apple did without infringing on any particular patents. Even if they did have to spend the same R&D expense to get there, Apple has already derisked it. So they have a small fraction of Apple's expense and none of the risk, so I would fully expect to see M1 competitive chips come out in a few years' time. This isn't to say that they'll catch up with Apple completely; only that they'll be able to recoup a significant amount of the gap in a relatively short period of time such that there will be a similar breakthrough in the PC market (this is effectively what TFA is predicting as well, if that lends me any credibility).


> it's not like other ARM chip manufacturers are going to have to go through all of the same R&D expense to figure out how to make chips that are comparably fast--they just need to copy what Apple did without infringing on any particular patents

Very hand-wavy; the devil is in the details of that "all they have to do". Copying success is not that straightforward, or else AMD would have had a response to Conroe in 2007, Intel to Zen in 2018, and Qualcomm to the A-series chips around 2014.


In my opinion, Apple "derisked" high-performance ARM chips with the A9-ish era five years ago. Other companies have had a long time to spend catching up to Apple mobile chips, and they haven't.


That's a completely reasonable challenge, and I might be wrong. My very amateur speculation is that, in the mobile space, Apple got incrementally better over time. Rank-and-file Android users didn't really notice the performance gap and thus didn't demand faster chips. M1 changed expectations for the laptop market virtually overnight, and rank-and-file users will expect M1 performance, battery life, etc. I think this represents an enormous market opportunity for PC manufacturers and thus chip suppliers that didn't really exist circa A9.

My key assumption is that there's some pareto principle at play--80% of the M1's performance gains over the next best generic ARM chip are due to a handful of key innovations while the remaining 20% come from a long tail of incremental improvements. This would mean that generic manufacturers could copy those handful of key innovations (modulo IP rights, which might be another fatal flaw in my speculation) and recoup 80% of that performance gap in just a few years. If that assumption is bad and the performance is entirely an accumulation of lots of minor innovations over time, then I suspect it will take chip manufacturers quite a lot longer to catch up.


There's more to it than chip design though. There is the OS work to take advantage of the chip's new features. And designing new chip features to accommodate sw bottlenecks. Apple has very close collaboration between software and hardware engineering, while in the competitive space, those two things are not under the same roof. Makes it tougher.


True, and I don’t expect generic manufacturers to completely close the gap between them and Apple, but I do expect there will be enough to copy that we’ll still see a step change in PC performance.


Qualcomm has been unable to compete with Apple's ARM phone processors for years. It's gotten to the point that the only processor that comes close to a new Apple SOC on benchmarks is Apple's previous SOC. Why do you expect the PC market will be different than the phone market?


Because Qualcomm now just uses (or slightly modifies, e.g. cache capacity) ARM's Cortex-A CPU core designs, while both Intel and AMD design their own cores.


> When iPhone came out in 2007, Android was still an OS for BlackBerry like devices.

When iPhone came out in 2007, Android did not even exist. [1]

[1] https://en.wikipedia.org/wiki/Android_(operating_system)


How is that at all true? From the article:

> An early prototype had a close resemblance to a BlackBerry phone, with no touchscreen and a physical QWERTY keyboard, but the arrival of 2007's Apple iPhone meant that Android "had to go back to the drawing board".


Your own source contradicts you.


I'm not sure how. iPhone was available in June 2007. Android went into public beta in November 2007, and reached 1.0 and was first released in a commercial device in September 2008. [1]

I guess Android, or what morphed into Android, was probably being developed, in a pre-beta state, when iPhone was announced in January 2007. Okay. It sounded to me like someone was saying that Android was released on BlackBerry-like phones at the time the iPhone came out. That was the only thing I was objecting to. Sorry, a small point I almost didn't even bother to make, and probably not helpful to anybody anyway.

[1] https://en.wikipedia.org/wiki/Android_version_history


If it were so easy, why are all of the other ARM manufacturers years behind?


To be clear I'm not talking benchmarks, I have no idea what the raw numbers are. I'm talking about my personal experience using ARM SoCs. Specifically I've worked with Tegra chip and they're powerhouses for instance, I could definitely see myself do my day-to-day development on such a chip.


Do they? Because I have yet to see any numbers that aren't clearly the result of a massively streamlined SoC. Yes, the ARM chip is great, but the RAM is blazing fast, tiny, and much closer to the CPU than anything out there for x86, right?

I'll be interested to see if Apple can keep this up with 128GiB of RAM and a beefy GPU.

So far, to me, the M1 is great and ARM is great, but comparing a tiny amount of RAM soldered next to the CPU with devices that support huge, flexible RAM configurations seems dishonest. Unless Apple manages to squeeze 128GiB of RAM onto their chip, at least.

I want to see what my Blender workstation looks like on Apple's Architecture. Where i purpose built a huge CPU, GPU, and RAM for Blender rendering and large scenes. I suspect Apple will start seeing huge falloffs in the SoC design under larger workloads.

Apple is dominant in making phones though. And this Laptop is designed like a Phone. So yea, it's amazing for that.


Architectural analysis of the chip [0] suggests it is a massive improvement as a general purpose CPU, not just because of Apple's vertical integration and streamlining.

The magnitude of extra ROB entries, having an 8x wide instruction decoder, that massive L1I cache, and such fat cores with lots of ALU and other op units while simultaneously keeping over a 3GHz clock is a phenomenal architectural improvement overall.

Apple may not need to add more RAM locally. Other silicon designers like IBM with POWER10 are moving to serial memory with massive "L4" caches, seen previously with their Centaur memory controllers. I can see Apple using OMI or a similar serial memory bus to add extra memory, using 16/32/etc GB of PoP RAM for caching reads and writes to slower off-chip memory. I expect Intel and AMD will eventually move to a similar memory model, at least in server grade hardware.

[0] https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...


Yeah, and to be clear I'm not saying it's not awesome. I'm just saying that a lot of things, like real-world benchmarks of programs, massively benefit from the proximity and speed of the RAM. Proximity seems to be a very difficult beast to scale: it works great in the small, horribly in the large.

Side note: even if Apple rolled out 128GiB tomorrow, I probably couldn't afford it. As the OP says:

> including automobile-priced Mac Pros

That's rough.


Hypothetically, with these new memory architectures, you could get the best of both worlds. Best case, you get a huge speed up if everything fits in the high-speed RAM, worst case, you fall back to the "old" RAM.

Side note, while Apple charges ridiculous prices for their upgrades, Mac Pros are actually some of the more ""reasonably priced"" computers comparatively. I've looked at HP and Lenovo's equiv. workstations, and you pretty much get screwed by everyone at that scale.

My desktop can take up to dual 128 GB sticks but one stick costs over $1,000, so I'm stuck with my 32 GB stick :(

... OTOH, it's pretty insane you can get 128GB of RAM for $1,000 nowadays. I remember when 8GB was a lot and cost several hundred. Time and miniaturization go on I guess...


> RAM soldered onto the CPU

It's just very close to the CPU, on the same package. It's not on the same die or anything. There are laptops that already have a shorter physical path to RAM than the M1 MacBooks, stacking the RAM right next to the CPU is common.

So it's just a matter of time before >16GB is available with an M1. These are just entry level machines in the first round.


> It's just very close to the CPU, on the same package. It's not on the same die or anything.

Yeah, I was just simplifying. Though for an SoC I think that's a perfectly fine simplification, lol.

> So it's just a matter of time before >16GB is available with an M1. These are just entry level machines in the first round.

Think there will be > 32GiB?


Almost certainly, although maybe not immediately.


I know I'm probably older than most people here, but the notion that 16 GiB is "a tiny amount of RAM" is tremendously amusing to someone who paid $740 in 1986 (with a developer's discount) for 2 "MB" of RAM (we didn't have MiB back then)...


To be fair, we sort of still don't have MiB even now.


Let the Pro devices come out next year.


It depends on what you're comparing: the A14 in phones has a multi-core score that would be quite comparable to the just-announced Snapdragon 888 (extrapolating a 20% boost over the SD865), all while the A14 is clocked higher (3.0 GHz x 4 cores vs 2.84 GHz x 1 + 2.4 GHz x 3), comparing performance cores. Single-threaded, OTOH, Apple wins comfortably, though I would expect the X1 to reduce the lead a bit. As for AI performance, Apple claims 11 TOPS while Qualcomm claims 25 TOPS.


A14 has two performance cores.


> But I think the overarching point is true: now desktop and laptop makers can't just pretend that having a hot and/or slow, clunky, noisy x86-based architecture is just a fact of life. I hope and expect that it'll help create a new generation of ARM-based laptops and maybe even desktops that will run cool and smooth.

Be careful what you wish for. x86 is the most open architecture right now; ARM, on the other hand, has locked bootloaders, messy device trees, and so on. You can see that virtually every Linux distro running on an ARM SoC needs device-specific tweaks.


> X86 is the most open architecture right now.

x86 is not "Open" in any fashion. You can't license x86 even for a fee, let alone free.

While ARM isn't open either, it is by comparison far more available to license at reasonable cost.

> ARM, on the other hand, has locked bootloaders

It also has open/ unlocked boot loaders. One of the advantages of licensable technology like this is you can have a lot of different architectures with the same underlying core. With x86, you get whatever the Intel/ AMD duopoly decide to ship you.


> x86 is not "Open" in any fashion. You can't license x86 even for a fee, let alone free.

OP is not talking about being able to manufacture x86 CPUs kind of open, that's useless for most folks.

They're talking about being able to install any OS, drivers being fairly universal, no device specific out of tree kernel patches for the vast majority of PCs, no device trees etc.


It's a lousy use of the word "Open" in a discipline where Open has a fairly specific meaning.

I mentioned that as well, though. Using your goofy definition of open, x86 is no more "open" than ARM. The Pinebook and Raspberry Pi both use no-bullshit ARM setups. With ARM, how accessible the architecture is depends entirely on the implementation, which varies greatly by manufacturer.


> x86 is no more "Open" than ARM. Pinebook and Raspberry Pi both use no bullshit ARM setups.

They're ARM SoCs, meaning that you can't just download the Ubuntu ARM ISO[1] and run it on either device like you could with an x86 ISO and an x86 machine.

To run Ubuntu or any Linux distribution, you need to use images that were specially crafted for either SoC.

This is because ARM SoCs use custom bootloaders and don't have hardware on enumerable buses. Each SoC model is essentially a unique hardware configuration that needs an individual Linux port.

ARM servers are more similar to x86 machines in that they use UEFI and have enumerable buses.

As long as machines are shipped that use ARM SoCs, Linux support for all of them will be quite difficult to achieve[2] compared to x86 machines.

[1] https://ubuntu.com/download/server/arm

[2] https://elinux.org/images/a/ad/Arm-soc-checklist.pdf
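As a concrete illustration of why each SoC needs its own port: devices on ARM boards are typically described statically in a device tree compiled into the boot image, rather than discovered at runtime. A minimal, entirely hypothetical .dts fragment (vendor name and addresses invented for illustration) looks like:

```dts
/* Hypothetical board description: the kernel only knows this UART
   exists because someone wrote it down for this specific board --
   there is no bus enumeration that would find it. */
/dts-v1/;
/ {
    compatible = "vendor,example-board";

    serial@10000000 {
        compatible = "vendor,example-uart";
        reg = <0x10000000 0x100>;   /* MMIO base address and size */
        interrupts = <10>;
    };
};
```

An x86 machine needs no such per-board file: firmware plus PCI/ACPI enumeration tell the kernel what hardware exists, which is why one generic ISO boots nearly everywhere.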


While Apple likely has no motivation currently to make a UEFI-based ARM computer, there's nothing that prevent anyone else from doing so. UEFI is not x86-specific.

(I also think it's a bit too early to say definitively that no ARM-based Apple computer will ever be able to boot into Linux. I don't think Apple has any motivation to make that easy, but that may not be the same as making it impossible.)


There is also nothing pushing OEMs to do this and in the decades we have had ARM SoCs we have not seen this happen.


Windows Mobile/phones actually had UEFI bootloaders with Windows Boot Manager [1]. For some Windows mobile devices, people have managed to unlock the bootloaders and load other EFI applications.

[1] https://docs.microsoft.com/en-us/windows-hardware/drivers/br...


UEFI can be just as closed as custom non-UEFI bootloaders. Windows Mobile devices were UEFI, but they had Secure Boot enabled making it almost impossible to boot non-OEM signed images. (Un)fortunately, their Golden Key was leaked allowing experimentation with some Windows Mobile devices [1].

[1] https://www.xda-developers.com/microsofts-debug-mode-flaw-an...


It's cheaper to just design a ARM SoC with a custom bootloader.

> I also think it's a bit too early to say definitively that no ARM-based Apple computer will ever be able to boot into Linux.

There's a big difference between a SoC running a Linux fork and actually running Linux distributions in the same sense that we can on x86 computers or ARM servers.


> Using your goofy definition of Open

If I buy any x86 PC, chances are I can install mainline Linux on it. Picking up any particular recent ARM SoC and the chance of doing so is much, much less. The M1 included, after all.

> Pinebook and Raspberry Pi both use no bullshit ARM setups.

Both of these took considerably longer to get mainline support than your average x86 hardware needs.


> Both of these took considerably longer to get mainline support that your average x86 hardware needs.

Not sure what you are thinking here. It's taken 25 years so far and we still have tons of x86 hardware which isn't supported well in Linux. There is a reason Dell ships a specific laptop for developers with Linux pre-installed.


> There is a reason Dell ships a specific laptop for developers with Linux pre-installed.

And yet, even Windows DELL laptops are more likely to run mainline Linux than any average ARM SoC, even including the ones targeted at developers.

I am no fan of x86, but trading what we have now for an M1 is a net loss in terms of our ability to install anything we want as users, no question.


Consider for a moment how many independent developers rely on x86 based Linux systems versus ARM. x86 isn't open, Linux's support for it is a symptom of Intel's monopoly.

> And yet, even Windows DELL laptops are more likely to run mainline Linux than any average ARM SoC, even including the ones targeted at developers.

This is more about inertia and developer resources than openness. The more developers you have on ARM, the faster device and software support will come.

> I am no fan of x86, but trading what we have now for an M1 is a net loss in terms of our ability to install anything we want as users, no question.

You've changed the question here. We were speaking about ARM in general, not the M1.


Well, there are two issues there. One is whether the bootloader is open, which it is.

The second is whether there's a Linux port for that hardware, which there isn't. There's a project underway; it seems likely that the CPU (which is still AArch64) will work, and likely networking too. However, to make a usable desktop you'll need the GPU, which is much harder.


> Raspberry Pi both use no bullshit ARM setups.

Seriously? You need a blob from Broadcom to even boot the thing, and you call that a no bullshit ARM setup? I am astounded.


Compared to the amount of bullshit we had to deal with in the 90s and 00s getting Linux running on a laptop, it's a walk in the park. Often it was a choice between mediocre FLOSS drivers or binary blobs from 2-3 private parties... when things even had drivers at all. It's been a while, but how many laptops lack wifi support or require proprietary drivers to work now?

Are the FLOSS drivers for Nvidia's cards great now? How many years have people been hacking on that project to make them work as well as they do?

What people are describing as "Open" is actually an enormous amount of work on the part of developers and the grudging support of hardware vendors after decades of struggle.


> It's been a while, but how many laptops lack wifi support or require proprietary drivers to work now?

Very few actually.

> Are the FLOSS drivers for Nvidia's cards great now?

Don't shift goalposts, we are talking about CPU support, not GPUs... GPU support on ARM is anyway not that great to begin with.

> What people are describing as "Open" is actually an enormous amount of work on the part of developers and the grudging support of hardware vendors after decades of struggle.

Intel has Linux dedicated teams at least, pushing FOSS code every single day. Most ARM licensees could not care one bit about FOSS.


> Don't shift goalposts, we are talking about CPU support

Linux already supports ARM CPUs and has for some time. The challenge is drivers. Getting video/ wifi/ etc running is the part that makes devices actually useful. The Broadcom Blob the poster I replied to was referring to is not for supporting ARM. Very similar to the Nvidia blob many people install (often unknowingly) when they install Linux on a system with Nvidia.

> Intel has Linux dedicated teams at least, pushing FOSS code every single day. Most ARM licensees could not care one bit about FOSS.

Intel didn't give a shit about Linux for how many years? Companies don't care about FOSS until it has the potential to affect their bottom line. It was only after Linux servers were running half the Internet that Intel stepped up and started supporting Linux.

I'm guessing you missed the first couple decades of Linux adoption. Linux on ARM is just a fast motion repeat of the early days of Linux on x86.


There is a free one: https://opencores.org/projects/zet86 (I don't know how complete or production-grade it is).


I am pretty sure the most open at the moment are OpenPOWER and RISC-V; there is also MIPS, but I am not sure how open it is.


Can you buy a desktop with those right now?



> POWER

at prohibitive costs. So not really an option...


They don't seem that expensive. They're high-end machines themselves, and their pricing seems on-par with high-end x86 workstations, like the Dell Precision series.


> Be careful what you wish for. X86 is the most open architecture right now

You might want to take a look at PowerPC/OpenPOWER.

ARM not being very open is mostly because there's been limited development interest in it, aside from specialised devices, and those companies have no real need or interest to have open things. Doesn't mean ARM itself can't evolve past it.


> ARM not being very open is mostly because there's been limited development interest in it, aside from specialised devices

That's an extremely weird thing to say...


I know very little about this space, but presumably that will change if/when a significant portion of the laptop space (and probably the server space as well) moves from x86 to ARM? It might be painful for early adopters for a few years, but I would expect this stuff to get ironed out in time, no?


No, that ship has sailed. Anything new that comes now can be built according to the expectations smartphones have set over the past decade and a half. Closed systems that are treated like disposable items. Even PCs in their mobile incarnation are now far less open and flexible than they used to be.

The reason PCs are still open is historical, they started like this and people can't really see PCs any other way but they're a dying breed and whatever comes after is a different game with different rules. We'll have some niche manufacturer building something more open for the small segment of enthusiasts.


We've basically got one foot through that door already.

> We'll have some niche manufacturer building something more open for the small segment of enthusiasts.

I would argue that's exactly what the Raspberry Pi and Arduino are.


The raspberry pi requires a very proprietary and custom boot process. Nothing can start until you first load the GPU driver blob on to the SoC.

This is why you need rpi specific distro builds.


"Developer Mode" exists in many new devices, eg. Chromebooks, Oculus Quest, Android phones.


There's a difference between swapping or adding any of the many standardized components you want or installing pretty much any operating system you can think of on your PC, compared to enabling "developer mode" on an Oculus Quest.

The fact that so many people casually say how a platform is open or flexible because if you jump through enough hoops you get to make some minor modifications to your device, or applauding the openness of a system for allowing you to sideload standard apps is the reason manufacturers don't see a need to propagate the PC philosophy anymore. Too many users grew up with this new model so they don't see the loss.

None of the devices you mention are user serviceable in any way hardware-wise, the software is almost always locked onto the device and there's no easy, or even supported way of changing it. And if you do change it you just get a small variation of the same software. Your phone may get a slightly degooglified version of Android but it's unlikely you'll get it to run Ubuntu, Windows, or a FreeBSD without a massive investment in time and knowledge.


My understanding is that developer mode on Android and Chromebook will unlock the bootloader. Software wise, that's about as user-hackable as you can get. It's true that the Oculus Quest is not as open.

I don't expect or even really want to swap hardware on these devices: if that was a priority for me I would get a PC, Arduino, Raspberry Pi, etc. I get the desire to tinker and customize, but from where I'm sitting I have plenty of opportunity to do that.

I also don't expect the device manufacturers to do the work to make Ubuntu or FreeBSD run. They didn't do that for PCs, the community did.


Sadly, I find this persuasive.


The Apple M1s don't have locked bootloaders; there's a warning in the firmware about enabling unsigned booting, but it's just a warning. That's why there's an attempted Linux port underway.


> messy device trees

Can you please elaborate?


> * the x86 world resolves around standards such as ACPI, BIOSes and general-purpose dynamic buses.
>
> * ACPI normalises every single piece of hardware from the perspective of most low-level peripherals.
>
> * the BIOS also helps in that normalisation. DOS INT33 is the classic one i remember.
>
> * the general-purpose dynamic buses include:
>   - USB and its speed variants (self-describing peripherals)
>   - PCI and its derivatives (self-describing peripherals)
>   - SATA and its speed variants (self-describing peripherals)

> exceptions to the above include i2c (unusual, and taken care of by i2c-sensors, which uses good heuristics to "probe" devices from userspace) and the ISA bus and its derivatives such as Compact Flash and IDE. even PCMCIA got sufficient advances to auto-identify devices from userspace at runtime.

> so as a general rule, supporting a new x86-based piece of hardware is a piece of piss. get datasheet or reverse-engineer, drop it in, it's got BIOS, ACPI, USB, PCIe, SATA, wow big deal, job done. also as a general rule, hardware that conforms to x86-motherboard-like layouts such as the various powerpc architectures are along the same lines.

> so here, device tree is a real easy thing to add, and to some extent a "nice-to-have". i.e. it's not really essential to have device tree on top of something where 99% of the peripherals can describe themselves dynamically over their bus architecture when they're plugged in!

> now let's look at the ARM world.

> * is there a BIOS? no. so all the boot-up procedures including ultra-low-level stuff like DDR3 RAM timings initialisation, which is normally the job of the BIOS - must be taken care of BY YOU (usually in u-boot) and it must be done SPECIFICALLY CUSTOMISED EACH AND EVERY SINGLE TIME FOR EVERY SINGLE SPECIFIC HARDWARE COMBINATION.

> * is there ACPI present? no. so anything related to power management, fans (if there are any), temperature detection (if there is any), all of that must be taken care of BY YOU.

> * what about the devices? here's where it becomes absolute hell on earth as far as attempting to "streamline" the linux kernel into a "one size fits all" monolithic package.

> the classic example i give here is the HTC Universal, which was a device that, after 3 years of dedicated reverse-engineering, finally had fully-working hardware with the exception of write to its on-board NAND. the reason for the complexity is in the hardware design, where not even 110 GPIO pins of the PXA270 were enough to cover all of the peripherals, so they had to use a custom ASIC with an additional 64 GPIO pins. it turned out that that wasn't enough either, so in desperation the designers used the 16 GPIO pins of the Ericsson 3G Radio ROM, in order to do basic things like switch on the camera flash LED.

> the point is: each device that's designed using an ARM processor is COMPLETELY AND UTTERLY DIFFERENT from any other device in the world.

https://lists.debian.org/debian-arm/2013/05/msg00009.html


ARM UEFI systems exist. They are now the norm on servers, along with ACPI. Even some embedded devices use UEFI. There are other standards to address those various points.

This post is so outdated that ARM server companies have had the time to build all of this, then still die (most of them), and then revive with the AWS Graviton2 and other Neoverse chips.


It's not outdated, it holds true for ARM SoCs. Consumer ARM devices are ARM SoCs, and not server hardware. It's cheaper and easier to design custom SoCs than it is to build consumer hardware that conforms to the standards that ARM servers do.


> I want to believe.

If we can end up with superfast, supercool ARM-based Linux laptops, that'd be a huge deal :)


This is it. I'm excited because Apple just showed what is possible. Other companies are going to have to try to match it.

I have wanted a powerful "enough" fanless laptop for many years. I got excited about 4 years ago with Intel's short-lived M line and I bought an Asus ux305. It served me well for front end web dev (had to use a cooler stand under sustained high load though). Great Linux support and battery life (after installing tlp). I remember feeling a bit of awe that it was reasonably snappy with no fan, but it was not quite there. I can only imagine the feeling is an order of magnitude greater with the M1.

I don't own any Apple products, but I still hope that raising the bar will benefit me later on. I'll be pretty pumped when this level of hardware can run a mainline kernel.


>Other companies are going to have to try to match it.

This actually worries me.

Lately, anytime I look outside the Apple ecosystem, I wonder to myself what all these companies are doing... sure, I could see AMD closing the gap, but Apple's made additional choices for integration beyond what AMD would do alone.

I guess my summary would be: other companies trying to "match" Apple hasn't really gone well in the past decade, short of a few select standouts.


I've had the same desire for a lightweight, fanless laptop for many years, and also had (and still have and use) an Asus ux305, which I used for Linux development.

My main use machine now and for last couple years has been a Google Pixelbook. Lightweight (2.4 lbs) and fanless, solid 8+ hours on a charge, great for general use on chromeos, okay for Linux with built-in Crostini container.

I've had an iPad, but have never had any other Apple product. I've been pretty much anti-Apple since 1979, when as a 15-year-old I bought a Radio Shack TRS-80 over the much more expensive and glitzier Apple IIe.

I now have a Macbook Air M1 on order. We'll see how it goes. I'm sensitive to weight, prefer very light laptops. At 2.9 lbs it will be a noticeable step back from the ux305 (2.6 lbs) and Pixelbook (2.4 lbs). If I end up not bonding with the Macbook I plan to sell it; Apple's generally high resale value is a factor in giving the M1 a try.


Let's not forget the super long battery life.

While I dislike Apple and have never owned any of their devices, I can't help but feel thankful to them: now that we know this is possible, others will soon re-create their success, and we will hopefully get the same perks on a more open platform.


Someone needs to spend the time/ money to design this M1 killer CPU. There is very little money in delivering powerful Linux laptops.

ARM based Linux server CPUs? Very likely. Low end/ Chromebook style laptop ARM CPUs, Already exist.

Performant ARM CPUs designed for laptops? I expect you just gotta hope Windows on ARM takes off or start porting Linux to M1 Macs.


Work has already begun to port Linux to M1 Macs. Linux for ARM and Windows for ARM are already running as PoC on the M1 Macs with QEMU (running ARM-on-ARM).

https://www.techradar.com/news/you-can-now-run-linux-and-win...


I suspect people interested in "superfast, supercool ARM-based Linux laptops" are looking more for running on bare metal, not in a VM.


Linux on M1 bare metal is also in the works.


Please read the first sentence again. Porting is underway. It's already running under ARM to ARM emulation, but porting is underway.


If Linux can be made to run on M1 bare metal, I expect it will. And computing history shows us Linux can be made to run on anything ;)


I expect (very much hope) it will. I haven't contributed to the Linux on M1 fund yet, but I am seriously considering it, particularly if I get an M1 Mac. I likely wouldn't run Linux until after Apple retires MacOS for the M1, but want the option.


> Performant ARM CPUs designed for laptops? I expect you just gotta hope Windows on ARM takes off or start porting Linux to M1 Macs.

What makes you think Windows won't take off on ARM?

In any case, if it can be done -- i.e. if there aren't locked down specs -- you can bet Linux will be ported to M1 Macs.


> What makes you think Windows won't take off on ARM?

I have no idea if it will or not. Microsoft's current strategy on ARM seems pretty unlikely to succeed though.

> In any case, if it can be done -- i.e. if there aren't locked down specs -- you can bet Linux will be ported to M1 Macs.

I'm confident (and hopeful) we will see an M series Linux port. How good it will be without support from Apple is questionable. The current state of Linux on Mac doesn't exactly inspire hope.


> Microsoft's current strategy on ARM seems pretty unlikely to succeed though

I'm not familiar with Microsoft's current ARM strategy, but they can always pivot if it turns out it's really the future. I must reluctantly admit Microsoft has been capable of some pretty surprising and welcome changes in its history.


But that won't happen unless there's a Windows ARM build that does everything that Windows does in x86: the market for Linux-only laptops is pretty small.


> the market for Linux-only laptops is pretty small.

but growing more today than it was a few years ago because of the prevalence of Chromebooks in education. Many of which run ARM CPUs!


Most recent Chromebooks are intel-based these days.


Indeed, at least they are running linux though.


What makes you think, if ARM becomes the hot new target, that there won't be a Windows ARM laptop?


Yeah, that would be really amazing. Is it even possible to install Linux on this new Mac though? Probably not. Will Apple offer these chips to other manufacturers? Would be cool if we could build small form factor PCs with these M1 chips!

I also wonder about the driver situation. Intel hardware has excellent support on Linux, everything works out of the box. Apple probably has zero interest in contributing drivers to the Linux kernel.


Not at the moment but probably some time next year: https://www.patreon.com/marcan


You are correct. The answer is no Linux on M1.


Contribute to the Linux -> Apple M1 projects, I have.


Not going to happen anytime soon.


It's ok, I'm in no hurry: my current Intel-based Linux laptop works great.

I can dream of an even better one, though!


If we can end up with superfast, supercool ARM-based Linux laptops, that'd be a huge deal :)

Apple showed Debian running in a VM on an M1 prototype Mac nearly 6 months ago.

I get that it won't be acceptable for some Linux diehards but the superfast, supercool ARM laptop running Linux will soon be an M1 MacBook Pro.


Apple laptops have always been among the best in terms of quality. If you can run free software on it, even better.

Is it actually possible to replace macOS with Linux though?


Apple hasn’t locked down the Apple Silicon Macs to prevent a non-Apple OS from being installed, so maybe?

The main issue is driver support, since graphics, networking, etc. is all Apple proprietary hardware.


If I can get it to run Linux not on a VM, but natively and without coexisting with macOS at all, then it'll be good enough for me!...

...except I cannot afford a macbook. The one I'm using now was paid for by my employer.

More generally, I'd love to see fast ARM platforms not built by Apple.


I don't think you really pay that much extra for an Apple machine, especially for the M1 ones. I'm afraid that when/if fast ARM alternatives arrive, they will probably cost 80-100 % of what the MacBooks cost. So basically, if you can't afford a MacBook today, you won't be able to afford a comparable high-end ARM offering from, say, Dell.


I honestly don't know about the M1's, but I've never liked the price/specs ratio of Intel macbooks. While it's true their build is solid and they look good, they always seemed overpriced to me regarding specs.

And even the build quality... a friend who's an Apple fan used to tell me "they are better quality, they have less problems" yet his macbooks seem to break down at about the same rate as my Dell, HP or Lenovo laptops, so...

I've decided macbooks look good, have good quality screens and the best touchpads. I'm unwilling to pay a premium for that, and I don't even like OS X/macOS...

> I'm afraid that when/if fast ARM alternatives arrive, they will probably cost 80-100 % of what the MacBooks cost.

Possibly! I wouldn't buy them when they are the latest new thing, either.


One big difference is that MacBooks (and Apple products in general) retain their value far better than any other computer brand. If you're looking at the used market to save money, a Dell looks way cheaper because it's depreciated far more than the Apple machine.


To be honest, I've never ever resold a laptop. Ever. I end up giving old laptops to friends or family who can't afford a new one. So resale price is a non-issue to me.


Resold once; neither the buyer nor I was happy with the deal. Family and friends it is.


...except I cannot afford a MacBook

You know, the M1 Mac mini is $699…

And since you can attach up to 6 displays to it (or the MacBook Pro), you'd never have to look at macOS if you really didn't want to. ;-)

https://www.macrumors.com/2020/11/24/m1-macs-able-to-run-six...


But the M1 Mac mini is not a laptop. And in any case, if those are US prices, count about 4x in my country.


Local compensation practices where you live must really suck if you can't afford a $1000 laptop. :(


I feel this is not a very thoughtful comment. There can be many more reasons for not being able to afford a $1000 laptop other than local compensation right? Financial constraints like kids, mortgage, lack of spare cash for a new machine etc.?


I don't live in the US and I must politely ask you to see how the rest of the world lives.

The macbook I was given costs, after customs, taxes and everything, $4000. Not $1000. Are there cheaper macbooks? Sure. Why would I want them, though?


I'm sorry if my post seemed rude in any way.

First, this discussion is about the new MacBook Air that costs $1000 US which is why I mentioned that price point. I suppose tariffs and such would increase that substantially, so that's a fair point. My bad. That said, I believe there was discussion about the hypothetical situation where you did want a Mac.

More to the point, I checked your post history before I commented, and I gathered you are a developer, somewhere in South America (?). I suspect that the code I write is probably no better than the code you write. But your pay is presumably quite a bit less than mine. My point was that it's unfortunate that like work doesn't yield like rewards.

Now here I am going to make some assumptions... In all likelihood, people are amassing wealth off of your work in the same way they profit from mine. But due to local wage norms, they can pay you less and reap higher margins. They get to upgrade the yacht, but you might not even have enough free time to use a boat even if you could afford one.

It's possible that I extrapolated too much and that your situation is nothing like what I have inferred. Sorry in advance if that is the case.


>Those machines are just standing around in their underwear now because the M1 stole all their pants.

Not sure what shitty machine he's talking about, but my Threadrippers are not impressed with some 4-core M1.


He's talking about the vast majority of PC chips sold today, whose power consumption / performance / thermal ratios just got marmalised.

You're quite right, your machine and in fact everyone else's are still just as good as they were a few weeks ago. I got an Intel 5K iMac in the summer and I'm still very happy with it. I need to be able to run Windows and Linux in VMs, so the M1 isn't for me right now. It's a no for you either.

Having said that, the manufacturer of your Threadripper is now on notice. Apple isn't done with the M1; more powerful desktop chips are on their way over the next few years, and there's a very good chance they'll do to the high-end stuff exactly what they just did to Intel's and AMD's mobile chipsets. Apple are coming for them, and not being impressed now isn't going to serve them well down the line.


Sure, and when they do, we can have that conversation.

But claiming that what’s available today from Apple is of great interest to the high performance market is jumping the gun.

There’s also the fact that cost (including peripherals and support) and software availability, not performance and efficiency, are what guide most PC purchases. The M1’s advantages are far less pronounced in those areas.


> But claiming that what’s available today from Apple is of great interest to the high performance market is jumping the gun.

I think that misses the point that most users (and most software developers in particular) use laptops. The M1 SOC pretty well kills everything CPU-wise, and does graphics as well as a GT 1050 discrete GPU. It also throws in AI silicon and RAM, and runs all of it in a 20W (or so) power budget.

Single-thread performance is quite comparable to a Threadripper.

It'll be interesting to see how things look when M1 style silicon is introduced for the 16" Macbook Pro (guessing 8 performance cores, 32 GB RAM), and the Mac Pro (guessing up to 32 performance cores, and at least 64 GB RAM used as cache for a potentially much larger main memory).

> There’s also the fact that cost (including peripherals and support) and software availability, not performance and efficiency, are what guide most PC purchases. The M1’s advantages are far less pronounced in those areas.

Actually as to cost, it's highly competitive in the premium laptop segment (like Dell XPS for instance).

Software availability is good, since it'll run almost all Intel Mac software, all M1 Mac software (soon to be ~everything), and all ARM-compatible Linux software (ARM Linux runs in a VM).

The only thing missing is Windows, and that may well be temporary.

I'd say if you're doing web, Mac or Linux software development the Apple Silicon Macs are the wave of the future! :-)


I wonder whether Apple will really make a "Pro"-line Apple Silicon chip for the Mac Pro that's competitive with 32-64 core Threadrippers. Would it be profitable?


Before the M1 I worried about that. Now I don't. The M1 is the lowest-performance, lowest-end Mac chip Apple will ever make, and it's already knocking on the door of premium Intel and AMD chips. Not the super-premium, highly multicore stuff like Threadripper, but it's a lot closer than anyone expected in single core.

To me, this means that when Apple consciously goes after the high end desktop market, they are likely to overshoot in exactly the same way M1 overshot the mid range mobile space. That could put Apple’s high end iMac chips in Mac Pro territory anyway.


I'm on the other side. Vertical integration is a fast way to fall behind.

This move is very Nintendo-like. Gimmick then don't even attempt to compete with the big dogs.

The fanboys have their meat to chew on "it's not hot". And they will repeat it on command as Apple falls behind to a swath of competition. "It might be slow, but at least it doesn't warm my lap."


Nintendo's profit forecast for FY2020 is 150% of Sony's gaming division's.

Apple just dropped hardware which crushes every comparable competitor.

But to you, Nintendo is not competing by "making more money", and Apple is "falling behind" by ... jumping years ahead? This reads like personal axe-grinding that is detached from any objective link to the market.

https://www.cnbc.com/2020/10/28/sonys-second-quarter-profit-...

https://www.cnbc.com/2020/11/05/nintendo-raises-switch-forec...


The third most valuable company on the planet's long term business model has been a "fast way to fall behind", and is not "even attempting to compete with the big dogs"?

Well, they just pantsed Microsoft at their own ARM laptop game (#2 most valuable) and the #1 big dog is, er, Saudi Aramco.


It's all going to come down to user experience and versatility. Apple is doing some crazy stuff with the M1, but I personally would chalk (most of) it up more to being clever than true innovation. I think only Intel needs to be concerned right now because they're being outpaced by both AMD and Apple. Additionally, if the acquisition of Arm by Nvidia results in what people expect, Intel will basically be losing out in every market.

Going back to versatility, there's no way that AWS or Azure is going to move to Apple silicon for their infrastructure, so there's still a massive market for AMD or other Arm manufacturers to target. For the consumer side unless Apple releases a chip that results in them making a sub-$800 laptop, they're just simply not going to capture a large market. With the rise of devices like Chromebooks in education and home there's still something to be said for Apple devices just being too expensive.

In general, Apple devices also leave a LOT to be desired for any sort of large scale enterprise organization. Unless they go back to their 90's and early 00's mindset, I just can't see this changing.

It's certainly exciting to see the CPU market being upended by Apple and AMD, as well as the potential future for a large architecture shift, but I don't think there's any real threat to market shares as a whole.


You don’t just go and make every part of your chip architecture wider by being clever. Well, there was probably some cleverness involved, but it’s not like this has been a one-off trick.


By clever I mostly mean, from what I've seen from reports, handling stuff like the transition from x86 to ARM by baking instruction-translation support into the die. Their approach with some things is a bit different from what others are doing, but it's not like they wholly invented a new 2nm process that breaks what we thought the laws of physics were; they're using the same fab as everybody else. Thus their design at the moment is clever.


There’s no hardware instruction translation.


there is a little hardware assist though: TSO can be enabled and disabled, which simplifies x86 instruction translation by implementing the x86 memory model in hardware and avoiding unnecessary memory barriers


This is Apple we are talking about. The last 20 years have been tricks.


Hehe, i see what you did here ;)


> there's no way that AWS or Azure is going to move to Apple silicon for their infrastructure

AWS already have a series of AArch64 instance-types which they will most likely continue to improve.


I didn't say that they wouldn't go Arm, I said they wouldn't buy chips from Apple. Apple would never sell it like that.


Current Macs on AWS run via real Apple hardware, so I don't see how M1 will be different. https://aws.amazon.com/blogs/aws/new-use-mac-instances-to-bu...


Right, but they're going to remain a tiny niche. They're priced so highly that there's no risk of them cannibalizing Amazon's own AArch64 instance-types.


I don’t know about under $800, but I wouldn’t be at all surprised if Apple didn’t keep the current M1 Air on the lineup at a lower price when the Airs get their next revamp.

This is exactly what Apple did with the iPad, nowadays you can get a very decently specced 10.2” iPad with their 2018 CPU for $309. That’s astonishing, but it’s a great way for Apple to get extended value out of their silicon.


but my threadripper's are not impressed with some 4 core M1

The M1 has 8 CPU cores, 8 GPU cores and 16 Neural Engine cores. It also runs at 10 watts and maxes out at 24 watts. A Threadripper starts at over 100 watts…


So are we going to have "Core Wars" to replace the "MHz Wars" of previous decades?


No, but we've had performance per watt wars for years now. It's why prior to Zen you only saw AMD chips on the cheapest of laptops, and also now why Intel is panicking.


No probably not. More of a "Benchmark vs. Power Consumption" war I think.

Power users understand today that max MHz doesn't even matter anymore, for various reasons.

I think power consumption (even in desktops, servers) will be a large factor / marketing number.

I think the big tech sites will continue to hammer home some type of "CPU work / watt" type metric, which is roughly like "time to complete some standardized workload vs. power used to do it"


Some people don't care about performance per watt and only care about absolute performance.


To be fair, 4 of the cores on the M1 are efficiency cores and are very low performance (but sufficient performance, at incredible efficiency, for a lot of casual ongoing background things). An 8 performance core M1 would be a monster.

Not sure if it maxes out at 24 watts. Someone measured wall power of a Mac Mini hitting 21 watts at full load, but that's including power supply inefficiency, all of the supporting circuitry and IO, a fan, flash storage, etc.


> 4 of the cores on the M1 are efficiency cores and are very low performance (but sufficient performance, at incredible efficiency, for a lot of casual ongoing background things).

I think you're underselling the small cores a bit. Their microarchitecture is roughly comparable to a Cortex-A75, which was the high-performance core in Snapdragon 845 phones like the Samsung Galaxy S9. But Apple clocks them about 26% slower than the SD845 did, so maybe we should be comparing against the Galaxy S8 instead: flagship Android performance from 3 years ago. That's capable of a lot more than just casual background processing. Samsung introduced their DeX docking station for smartphone-powered desktop usage back with the S8 generation, when their phones only had as much computational power as the Apple M1's smaller cores.


Pulling other cores into this discussion does nothing but muddy the waters.

The Firestorm cores in the M1 are 4.5X faster at SPECfp than the Icestorm cores. The Firestorm cores in the M1 are 3.1X faster at SPECint than the Icestorm cores.

In my world, a 3.1 - 4.5X faster sibling makes them very low performance, relatively. With all cores working together, the "8" core M1 is equal in performance to a conceptual 4.5-core Firestorm part (see the Geekbench scaling, for instance). This, again, has positively nothing to do with any other core or part.

Bizarre that anyone thinks this is somehow negative or dismissive of the M1. It is an extraordinary chip. But they intentionally, purposefully put in 4 very low performance -- compared to Firestorm -- cores because it matches users' average workloads.
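The equivalent-core arithmetic above can be sanity-checked, under the (optimistic) assumption that performance scales linearly with core count and each Icestorm core contributes its SPEC ratio's worth of a Firestorm core:

```python
# Check of the "8-core M1 ~ 4.5 Firestorm cores" arithmetic, assuming
# linear scaling and the per-core ratios quoted in the comment.

firestorm_cores = 4
icestorm_cores = 4
specfp_ratio = 4.5   # Firestorm ~4.5x an Icestorm on SPECfp
specint_ratio = 3.1  # Firestorm ~3.1x an Icestorm on SPECint

equiv_fp = firestorm_cores + icestorm_cores / specfp_ratio
equiv_int = firestorm_cores + icestorm_cores / specint_ratio
print(equiv_fp, equiv_int)  # ~4.9 and ~5.3 Firestorm-equivalents
```

This lands slightly above the ~4.5 figure the comment cites from Geekbench scaling, which makes sense: real multicore scaling is sublinear, so the measured number comes in a bit lower than this idealized sum.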


But honestly, does it matter?

People buying a threadripper today don't care about it.

What matters to them is performance. My gf is a 3D artist; she doesn't care whether one CPU consumes more or less power than another (when I say she doesn't care, I mean she doesn't even know that CPUs have very different power requirements). She needs the fastest gear she can buy on her budget.

Apple M1s are not it.

There are many others who do not care because the hardware stays at the office, where someone else pays the bills. They don't care either, because the electricity bill is the last of their problems; fixing it by buying more efficient computing devices would mean spending a lot of money up front to replace at least half of the stock.

How many more months of electricity could that money pay?

A lot.

People buying Apple for its M1 low power consumption are a niche inside a niche.

So I think the reasoning stands: Apple M1s are not a real threat to its competitors because the market that really cares about its strengths is smaller than the one that doesn't, and Apple will keep largely missing the second one.


Laptops have outsold desktops every year for the last decade+. The M1, being a mobile friendly chip, is addressing the larger market. People may not care about power usage from the wall, but they do care about battery life.

Also, the fact that the M1 is even being mentioned along with a desktop powerhouse like the Threadripper says it all. The M1 is the lowest-end mobile Apple Silicon. We know higher-end mobile chips are coming, along with desktop ones. The M1 has put all the chip manufacturers on notice, much the same way the iPhone did (to the point RIM thought Apple was lying about the iPhone because they thought the device was impossible).


It's only being mentioned alongside threadripper because the article went overboard and claimed the M1 "embarrass all other PCs — ... including ... every single machine running Windows or Linux"

This is a joke. The M1's performance is nowhere near the Threadripper's. Even on per-thread performance, new AMD CPUs perform better than the M1... and they have many more cores. The M1 can only win these comparisons when we start adding all kinds of conditions, like power draw, heat, price point, etc. They need to add those conditions to redirect the conversation, because on raw performance the Threadripper destroys the M1.

The article made the mistake of leaving out those conditions to make it seem even more impressive than it is... but hyperbole and fantasy will always sound better than reality. Reality requires those conditions. That doesn't detract from the M1's impressiveness... but it's foolish to forget that and go off making ridiculous claims like the article has done.


That was sort of my takeaway as well. I use desktop computers for most of my more demanding work. Laptops are great and (like smartphones, tablets, etc) any improvement in their capabilities is to be welcomed--but there's a difference between a headline that reads "New laptop is the fastest computer ever!" and one that reads "New laptop faster than all previous laptops!"

I don't do the most intensive tasks on thin, battery-powered computers for a reason. I try to do those on the box that's always plugged into mains power with heatsinks and fans as needed, plenty of ports to plug things in, and a big monitor. A more performant portable computer is awesome, but until I can spend a similar amount to make my workstation smaller without losing capability, I'll likely keep using it for heavy lifting alongside a cheaper, less powerful laptop or tablet for lighter work or remoting in.


Android has outsold Apple 3 to 1

Now if we look at what people are buying and ask ourselves 'are these the people who care about TDP?' the answer is simply 'no'

The majority of laptops sold are sub-$700, as the previous post said: low budget, low power, and in education, markets that Apple has never even considered (except education, but that's mainly college students in the USA)

Not saying the M1 isn't good, but it won't shake the market the way Nvidia buying ARM could

P.S. The M1 is a Zen 3 competitor; Threadripper has been mentioned for the exact opposite reason: because that's the state of the art for people looking at performance in personal computing (while in HPC, ARM is prominent)


> People buying Apple for its M1 low power consumption are a niche inside a niche.

Got it, watching Netflix in Chrome without the laptop getting hot is "a niche inside a niche" while GPU intensive 3d graphics workflows are mainstream.


This is doable on practically anything. Talking about being powerful with low heat/power consumption implies heavier workloads than Netflix because otherwise the "powerful" part is irrelevant.

Of course I'm sure you can find laptops with terrible cooling design that would get uncomfortably hot watching Netflix, but there's plenty of thin and light laptops available for < $1000 that would do just fine too.


Even my 5 year old passive-cooling Core m3 12" Macbook streams Netflix for hours on battery without getting hot.


>watching Netflix in Chrome without the laptop getting hot is "a niche inside a niche" while GPU intensive 3d graphics workflows are mainstream.

Well then I have to say Apple achieved what my 7-year-old Nexus 5 already could... bravo.


If not getting hot while watching Netflix is the benchmark, I can show you a 400-euro laptop that can do 9 hours of Netflix on battery (or 7 hours of 1080p 60fps streaming)

Don't need no M1

AMD and Nvidia have sold more GPUs than Apple laptops by a large margin


People using powerful laptops care about battery life, though. For someone moving from a "workstation"-class laptop, the M1's value is significant.


>For someone moving from a "workstation"-class laptop, the M1's value is significant.

Um, NO, the M1s are not comparable with workstation-class laptops (not CPU-wise nor RAM-wise... and especially not GPU-wise)


> she needs the fastest gear she can buy on her budget.

> Apple M1s are not it.

If a person had a $1000 budget, they could buy an M1-equipped laptop from Apple. Is there something significantly faster at that price?


Or a desktop with 64 GB of RAM and two previous generation GPUs


64 GB? From where?

I just looked at Dell and an XPS desktop with 16 GB of RAM starts at $650. You aren't going to be able to add two GPUs and a monitor for $350.

At HP you can get to 32 GB for $650 but again you still need to buy GPUs and a monitor.


64GB of RAM costs < 300 euros

A lot of studios sell their old GPU (as in a generation ago) for cheap

If you stay out of vertical integrated supply chains you can buy the best your budget allows

If you are saying that you cannot assemble a laptop of the same quality of an Apple one, I agree

If you are saying that Apple laptops are what people want because they consume 20 watt full load, I disagree

The market for good-enough products is always gonna be larger than 'the best money can buy'

Apple will never make 'good enough' products, hence they are not competing with the bulk of what is being sold right now

It could change in the future, but not in the immediate future IMO


I'm not saying any of those things. I'm saying a $1000 laptop from Apple is a pretty good value for a lot of people and I'm not sure you can do a lot better at that price point. When you include things like having an Apple store across town where you can go when things break, it pulls even further ahead.

The best that money can buy changes when you put the actual budget. Today, Apple may be the best that $1000 can buy.


> 64GB of ram cost < 300 euros

Not all RAM is equivalent. If one wants 64GB of the kind of RAM M1 uses, which is very high performance, it will cost more than 300 euros.


M1 is using standard LPDDRx. It's not "very high performance". It uses a different interconnect -- that's it.

I guarantee that 64GB does not cost anything near EUR300.

You might be thinking of HBM[2] which has a wider I/O path and costs more.


> M1 is using standard LPDDRx. It's not "very high performance".

AnandTech disagrees with you: https://www.anandtech.com/show/16252/mac-mini-apple-m1-teste...

Besides the additional cores on the part of the CPUs and GPU, one main performance factor of the M1 that differs from the A14 is the fact that’s it’s running on a 128-bit memory bus rather than the mobile 64-bit bus. Across 8x 16-bit memory channels and at LPDDR4X-4266-class memory, this means the M1 hits a peak of 68.25GB/s memory bandwidth.

Later in the article:

Most importantly, memory copies land in at 60 to 62GB/s depending if you’re using scalar or vector instructions. The fact that a single Firestorm core can almost saturate the memory controllers is astounding and something we’ve never seen in a design before.
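The quoted peak figure is just bus width times transfer rate; here is that arithmetic spelled out with the numbers from the AnandTech quote:

```python
# Peak bandwidth = (bus width in bytes) x (transfers per second).
# Per the quote: 8 x 16-bit channels = 128-bit bus, LPDDR4X-4266
# = 4266 MT/s.

bus_bits = 8 * 16                  # 128-bit memory bus
transfers_per_sec = 4266e6         # LPDDR4X-4266 transfer rate
peak_bytes = (bus_bits / 8) * transfers_per_sec
print(peak_bytes / 1e9)  # 68.256 GB/s, matching the quoted 68.25 GB/s
```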


Anandtech is comparing M1 vs A14. It's high performance for a cellphone part.

Dual channel DDR3L or DDR4L also has a 128 bit bus. 4200MHz DDR4 is clocked on the high side for most laptops, sure, but it's hardly unusual.

Run the numbers and you get the exact same throughput figure as for M1, which isn't surprising, because we're just taking width * rate = throughput.

So I'll repeat my assertion, downvotes be damned: the memory on the M1 is not special. The packaging and interconnect is interesting. It might reduce latency a little; it probably reduces power consumption a lot. But there's nothing special about it. The computer you're on right now probably has the same memory subsystem with different packaging.
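The width × rate arithmetic the comment refers to, applied to an ordinary dual-channel laptop configuration, does come out the same (this is just the commenter's calculation made explicit):

```python
# Peak throughput = width * rate. A conventional dual-channel laptop
# has 2 x 64-bit channels = the same 128-bit bus as the M1.

def peak_gbs(bus_bits: int, megatransfers: float) -> float:
    """Peak bandwidth in GB/s for a given bus width and MT/s rate."""
    return (bus_bits / 8) * megatransfers * 1e6 / 1e9

print(peak_gbs(128, 4266))  # 68.256 GB/s -- identical to the M1 figure
print(peak_gbs(128, 3200))  # 51.2 GB/s for the more common DDR4-3200
```

So at the same 4266 MT/s the theoretical peak is identical; the difference the thread goes on to argue about is packaging, latency, and how close a single core can get to that peak.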


> Anandtech is comparing M1 vs A14. It's high performance for a cellphone part.

That’s where they started, but their conclusion was beyond that.

Did you miss the part where they said the fact that a single Firestorm core can almost saturate the memory controllers is astounding and something we’ve never seen in a design before?

This isn’t only about A14 vs M1.

It’s not that LPDDR4X-4266-class memory is special; it’s been around for a while. What is special is that the RAM is part of SoC and due to the unified memory model, the CPU, GPU, Neural Engine and the other units have very fast access to the memory.

This is common for tablets and smartphones; it’s not common for general purpose laptops and desktops. And while Intel and AMD have added more functionality to their processors, they don’t have everything that’s part of the M1 system on a chip:

* Image Processing Unit (ISP)
* Digital Signal Processor (DSP)
* 16-core Neural Processing Unit (NPU)
* Video encoder/decoder
* Secure Enclave

There’s no other desktop such as the M1 Mac mini that combines all of these features with this level of performance at the price point of $699.

That is special.


> Did you miss the part where they

I don't think that's notable, sorry. I would expect that of any modern CPU.

> it’s not common for general purpose laptops and desktops

Well, yeah, because "memory on package" has major disadvantages. You (laptop/desktop manufacturer) are making minor gains in performance and power and need to buy a CPU which doesn't exist. Apple can do it, but they were already doing it for iPhone, and they must do it for iPhone to meet space constraints.

I think unified memory is the right way to go, long term, and that's a meaningful improvement. But as you point out, there is plenty of prior work there.

> they don’t have everything that’s part of the M1 system on a chip

They actually do! The 'CPU' part of an Intel CPU is vanishingly small these days. Most area is taken up with cache, GPU and hardware accelerators, such as... hardware video encode and decode, image processing, security and NN acceleration.

Most high-end Android cellphone SoCs have the same blocks. NVIDIA's SoCs have been shipping the same hardware blocks, with the same unified memory architecture, for at least four years. They all boot Ubuntu and give a desktop-like experience on a modern ARM ISA.

> There’s no other desktop ... at the price point of $699

Literally every modern Intel desktop does this.


I've seen latencies when pointer chasing (in a relatively TLB friendly pattern) of 30-34ns. Have you seen similar elsewhere?


https://news.ycombinator.com/item?id=25050625

showed https://www.cpu-monkey.com/en/cpu-apple_m1-1804

which determined the M1's memory is LPDDR4X-4266 or LPDDR5-5500. If those memories are not high performance, what is?


You can't do what you do on a desktop on a laptop, not even a good one

Who cares if an M1 consumes less energy than a candle if I can buy 64GB of DDR4 3600 for 250 bucks and render the VFX for a 2-hour movie in 4K?

Another 300 bucks buy me a second GPU

When I deliver the job I put aside another 300 bucks and buy a third GPU

Or a better CPU

vertical products are an absolute waste of money when you chase the last bit of performance to save time (for you and your clients) and don't have the budget of Elon Musk

The M1 changes nothing in that space

Which is also a very lucrative space where every hour saved is an hour billed doing a new job instead of waiting to finish the last one to get paid

You can't mount your old gear on a rack and use it as a rendering node, plus you're paying for things you don't need: design, thermal constraints, a very expensive panel (a very good one, but still attached to the laptop body, and small)

So no, the M1 is not comparable to a Threadripper, it's not even close, even if the Threadripper consumes a lot more energy

When I'll see the same performances and freedom to upgrade in 20W chips, I will be the first one to buy them!

https://www.newegg.com/corsair-64gb-288-pin-ddr4-sdram/p/N82...

Then there's the 92% (actually 92.4%) of the market that is not using an Apple computer, which will keep buying non-Apple hardware

Even if Apple doubled their market share, it would still be 15% Vs 85%

How people on HN don't realise that 90 is much bigger than 10, and that a new laptop won't overturn the situation in a month, is beyond me


> You can't do what you do on a desktop on a laptop, not even a good one

Ummm... ok. But my comment was "not all RAM is equivalent."


Not all cars are equivalent

I guess you don't drive a Ferrari or a Murciélago

And does it really matter to have a faster car if you can't use it to go camping with your family because space is limited?

That's what an Apple gives you, but it's not even a Ferrari, it's more like an Alfa Duetto

It's not expensive if you compare it to similar offers in the same category with the same constraints (which are artificially imposed on Macs like there's no other way to use a computer...)

But if you compare it to the vast amount of better configurations that the same money can buy, it is not


>You can't do what you do on a desktop on a laptop, not even a good one

Yeah… no, those days are over. The reviews clearly show the M1 Macs, including the MacBook Pro outperform most "desktops" at graphics-intensive tasks.

>So no, M1 is not comparable to a Threadripper, it's not even close, even if it consumes a lot more energy

Um… nobody is comparing an M1 Mac to a processor that often costs more than either the M1 Mac mini or MacBook Pro. However, the general consensus is the M1 outperforms PCs with mid to high-end GPUs and CPUs from Intel and AMD. Threadripper is a high-end, purpose-built chip that can cost more than complete systems from most other companies, including Apple. However, that comes at a cost of power consumption, special cooling in some cases, etc.

>Who cares if an M1 consumes less energy than a candle if I can buy 64GB of DDR4 3600 for 250 bucks and render the VFX for a 2 hours movie in 4k. Another 300 bucks buy me a second GPU

The MacBook Pro has faster LPDDR4X-4266 RAM on a 128-bit wide memory bus. The memory bandwidth maxes out at over 60 GB/s. And because the RAM, CPU and GPU (and all of the other units in the SoC) are all in the same package, memory access is extremely fast.

From AnandTech; emphasis mine [1]: "A single Firestorm achieves memory reads up to around 58GB/s, with memory writes coming in at 33-36GB/s. Most importantly, memory copies land in at 60 to 62GB/s depending if you’re using scalar or vector instructions. The fact that a single Firestorm core can almost saturate the memory controllers is astounding and something we’ve never seen in a design before."

It can easily render a 2-hour 4k video unplugged in the background while you're doing other stuff. And when you're done, you’ll still have enough battery to last you until the next day if necessary. According to the AnandTech review [1], it blows away all other integrated GPUs and is even faster than several dedicated GPUs. That's not nothing; and these machines do it for less money.

>vertical products are an absolute waste of money when you chase the last bit of performance to save time (for you and your clients) and don't have the budget of Elon Musk

>The M1 changes nothing in that space

This is not correct… seeing should be believing.

Here's a video of 4k, 6k and 8k RED RAW files being rendered on an M1 Mac with 8 GB of RAM, using DaVinci Resolve 17 [2]. Spoiler: while the 8k RAW file stuttered a little, once the preview resolution was reduced to only 4k, the playback was smooooth.

[1]: https://www.anandtech.com/show/16252/mac-mini-apple-m1-teste...

[2]: https://www.youtube.com/watch?v=HxH3RabNWfE


The M1 beats low end desktop GPUs from a couple of generations ago (~25% faster than the 1050ti and RX560 according to this benchmark [0]). Current high end GPUs are much faster than that (e.g the 3080 is ~5 times as powerful as a 1050ti).

Don't get me wrong - this is still very impressive with a ~20 W combined power draw under full load, but it definitely doesn't beat mid to high-end desktop GPUs.

(This is largely irrelevant for video encoding/decoding though as you can see - as that's mostly done either on the CPU or dedicated silicon living in either the CPU or the GPU that's separate from the main graphics processing cores.)

[0] https://www.macrumors.com/2020/11/16/m1-beats-geforce-gtx-10...


How much does a 3080 cost? Could you build a complete computer around one for $1000?


You're missing the point. I'm not trying to argue about which system is better, I'm just saying that the comment I'm replying to is saying incorrect things about GPU performance. I'll answer your question anyway though:

You could build a complete desktop system including a GPU that's more powerful than the one in the M1 for ~$1000, but certainly not a 3080. They're very expensive, and nobody has any in stock anyway.

An RX 580 or 1660 would probably be the right GPU with that budget. (Although you could go with something more powerful and skimp out on CPU and ram if you only cared about gaming performance).


- a 3080 costs > $750. Good luck buying one; I would if it wasn't out of stock. On the other hand, a mobile GTX 1050 (the class of GPU the M1 competes with) can easily be found on eBay for < $50

- yes, you totally can. The best thing is that with a $1k entry level you can start working on real-life projects that have deadlines and start earning money that will let you upgrade your gear to the level you actually need, without having to buy an entirely new machine. The old components can serve as spare parts or to build a second node. You don't waste a single penny on things you don't need.

Even though, it's true, you can't brag to friends that it draws only 20 watts at full load and that the heat of the aluminium body is actually pleasant

It's a big sacrifice, I understand it.


> The reviews clearly show the M1 Macs, including the MacBook Pro outperform most "desktops" at graphics-intensive tasks.

They don't!

Cut the BS

> Here's a video of 4k, 6k and 8k RED RAW files being rendered on an M1 Mac

Blablablabla

That's not rendering


Well, if you're not going to consider power consumption, Fugaku (and its 28MW consumption) would like a word with you.

https://www.extremetech.com/extreme/311995-japans-new-arm-ba...

I think his point about moving the bounds of the power (and heat) / performance tradeoff is valid.


This is not a completely fair comparison, your threadrippers surely use more energy and produce more heat than the M1 chip.


Sometimes it matters, other times not. If you want absolute performance then it doesn't matter; if you want efficiency then it does.


Add 50 more PCIe lanes, double the bus width, 270 MB of cache, 60 more cores and a communication protocol, then get back to me.

Anandtech ran the numbers a couple years ago and an EPYC processor spent around 60% of its power budget on Infinity Fabric (up to 90% of used power with only a couple cores active).

An M2 based chip might get more done, but it won't use much less power. In truth, I'd guess there's a good chance it bottlenecks on the IO and performance doesn't increase as expected (without widening IO further with all the extra power costs).


The fact that you have to compare a high-end CPU against the base level M1 to show it isn't the fastest underlines Gruber's point pretty well.


> Not sure what shitty machine he's talking about, but my Threadrippers are not impressed with some 4-core M1.

Gruber greatly exaggerates here. Particularly mentioning the Mac Pro in the same paragraph.

Apple's M1 isn't competing with higher end Intel/ AMD desktop class CPUs with external GPUs. The M1 is a laptop CPU, designed for good performance with good thermals and excellent battery life. It will be interesting to see what they come up with over the next couple years to replace the CPU in the iMac and Mac Pro. But that is next year.


Wait for an 8-core (or the rumored 12 core) variants. Those will embarrass your threadripper.


I really can't wait to see what they'll do in the desktop space. For all the gushing reviews the M1 is receiving, it's worth remembering this is their entry-level machines.

As it stands right now, I have an 18 month old 27" iMac with 64GB ram and an AMD gpu - and an 8GB M1. And I'm preferring the M1 for almost any task that doesn't demand the real estate.


The M1 core is roughly on par with the Zen 3 core, so how exactly would an 8 core part manage to embarrass a 64 core Zen 2 part?


Depending on the benchmark, the M1 wins per core vs Zen 3. The Threadripper, on Zen 2, is substantially behind on per-core performance.

The Apple part also has twice as many memory channels (8 vs 4) and runs the memory significantly faster (4266 vs 3200).

The M1 also has about half the latency to main memory.

So for highly random workloads that aren't cache friendly I'd expect the M1 to win.


They might embarrass today's Threadripper. They don't exist yet, at least not outside an Apple lab. AMD's not sitting on their hands, either.


Then there will be 64-core/128-thread 5990X parts out


>Wait for an 8-core (or the rumored 12 core) variants. Those will embarrass your threadripper.

It will not. My TR has 64 cores / 128 threads, and you know what's the best thing? 128GB of RAM... and imagine that, it isn't even soldered... and I can even exchange my GPU, or slap in 2 others.

And if Apple ever brings out an M1 workstation, I can probably buy 3 other servers/workstations with that money.


Your TR is what, 280 watts TDP? Nearly 20 times more than an M1. And it's 30% slower in single core.

Firestorm cores use less than 4 watts each. A 16-Firestorm-core Mx will produce roughly the same multicore performance as your TR, with a less than 70 watt TDP.

And a $65 M1 dusts a 4800H that costs over 4x as much while using 1/3 the power. Top-end Threadrippers cost $2,000 to $3,000 each. You can bet an Apple Mx Threadripper killer isn't going to cost more than $500 given it only needs 4x the cores of the $65 M1.


At scale, cores are only a small part of the equation.

https://images.anandtech.com/doci/13124/IF%20Power%20EPYC.pn...

Given that scaling, you're looking at 140w between 64 cores, which is right at 2w per core. The all-core turbo for TR is 3GHz. If we assume linear scaling on M1 cores, if you dropped them in to replace the x86 cores and then had to reduce from 3.2GHz at 4w down to 2w per core, the all-core turbo would be a mere 1.6GHz AND about a third less cache per core.

TSMC claims 30% less power going from 7nm to 5nm. That means those x86 cores power usage drops from 2w to 1.4w. With the same linear scaling and 4w at 3.2GHz, the M1 design is now operating at 1.1GHz competing with a part that is still 3GHz and the M1 still has only 3mb of cache per core instead of 4.5mb.

Now, I'm betting that actual power usage is 3w which bumps frequency up to 1.5GHz. Still half the frequency, but possibly close enough in performance to Zen 3 at that frequency.

EDIT: It's apparently 3.45w during cinebench according to these benchmarks.

https://twitter.com/panzer/status/1328715510100344833

I'm betting that if they didn't bottleneck on the IO that the M1 would perhaps be faster, but at that scale, interconnects are a much, much larger issue than the cores themselves.
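The linear frequency-vs-power scaling assumed above can be written out explicitly (a crude model; real dynamic power scales roughly with f·V², so this overstates the frequency available at a lower power budget):

```python
# Linear frequency-per-watt scaling, as assumed in the comment above.
# Real silicon scales worse than this, so treat it as an upper bound
# on how wrong -- or right -- the estimate can be.

def scaled_freq(base_ghz: float, base_watts: float, budget_watts: float) -> float:
    """Frequency at a reduced per-core power budget, assuming linearity."""
    return base_ghz * (budget_watts / base_watts)

print(scaled_freq(3.2, 4.0, 2.0))  # 1.6 GHz at a 2 W/core budget
print(scaled_freq(3.2, 4.0, 1.4))  # ~1.12 GHz at the 5nm-adjusted 1.4 W
print(scaled_freq(3.2, 4.0, 3.0))  # ~2.4 GHz at the 3 W estimate
```

Note the 3 W case yields ~2.4 GHz rather than the 1.5 GHz stated in the comment; under the comment's own linear model, the 1.5 GHz figure appears to correspond to a budget closer to 2 W.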


Adding on to that

The Zen 3 chiplet is 4.15B transistors. The consumer IO die is 2.09B transistors. The Threadripper IO die is 8.34B transistors. The M1 is 16B transistors. The A14 is 11.6B transistors. That difference is mostly 2 Firestorm cores/cache, 4 GPU cores, another DDR4 controller, and some Thunderbolt/PCIe.

Taking a ruler to a die shot and multiplying that area by transistor density (16B/119mm^2) gives us somewhere around 500M transistors per core plus 3mb of cache. Adjusting out to 4.5MB per core to match Zen 3 adds another 75-100M transistors.

Zen 3 gives 518M transistors per core, but that includes interconnects. Taking a ruler to the core reveals about 14% of the die is interconnect, SMU, and wafer tests (about 580M transistors). Subtracting that from 4.15B and dividing into even core/cache gives 446M transistors per core.

We'll assume an identical IO cost of 8.5B transistors and 150M transistors of interconnect per core. (64 * (600 + 150)) + 8500 is 56.5B transistors and 420mm^2 total area. A zen 3 chip with similar densities would be 310mm^2.

No matter how you slice and dice that, Apple's chip is going to be significantly more expensive than AMD's chip. Thinking anything different is a pipe dream.

EDIT: also note that formal verification adds a lot to chip pricing.
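The transistor tally above, spelled out (all figures are the comment's own estimates, in millions of transistors, not official die data):

```python
# Transistor budget sketch for a hypothetical 64-core M1-style part,
# using the comment's estimates (millions of transistors throughout).

cores = 64
per_core = 600        # M1-style core + cache, adjusted to 4.5 MB
interconnect = 150    # per-core interconnect estimate
io_die = 8500         # assumed identical to Threadripper's IO die

total_millions = cores * (per_core + interconnect) + io_die
print(total_millions / 1000)  # 56.5 billion transistors, as stated
```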


> Firestorm Cores use less than 4 watts each.

This is just not true. A mini going from idle to a full single-core load is a power increase of 6W. A heavy multicore load hits over 25W.

> A 16 firestorm core Mx will produce roughly the same multicore performance as your TR

With what fantasy math? 30% faster single core performance would be 16 * 1.3 = ~21 equivalent TR cores. It'd be less than half the performance of a 64 core ThreadRipper.

> And a $65 M1 dusts a 4800h that costs over 4x as much and using 1/3 the power

https://images.anandtech.com/graphs/graph16252/119365.png

M1's single-core performance is very strong. But not enough to overcome a ~2x or greater core deficit, either, not at all.
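The core-deficit arithmetic above, under the thread's own assumption of a ~30% single-core advantage:

```python
# If one M1-class core ~ 1.3 Threadripper cores (the ~30% single-core
# edge debated in this thread), a 16-core part is worth ~21 TR-core
# equivalents -- well short of 64, ignoring all-core frequency scaling.

m1_cores = 16
single_core_advantage = 1.3
tr_core_equivalents = m1_cores * single_core_advantage
print(tr_core_equivalents, tr_core_equivalents / 64)  # ~20.8, ~0.33
```

So even granting the per-core advantage, the hypothetical 16-core part delivers about a third of a 64-core Threadripper's aggregate throughput, matching the "less than half" claim.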


On-die memory is pretty much their one ace in the hole that nobody else will have the guts to match. Intel can't because their fabs are behind, and AMD probably doesn't have the resources to fork their architectures and/or piss off the server integrators.

I can't live without 64GB in my desktop and would love to have more, so the new Macs are not for me.


M1 isn't using on-die memory[1]. It's on-package, and likely little more than just off-the-shelf Micron or Samsung dram modules that are just soldered instead of socketed.

1: https://d3nevzfk7ii3be.cloudfront.net/igi/ZRQGFteQwoIVFbNn (from https://www.ifixit.com/News/46884/m1-macbook-teardowns-somet... )


Thanks for the correction, that's a big difference. Still a severe limit but seems more hopeful for larger options in the future.


"$65 m1" where are you getting this number from? You can't buy an m1 CPU from a store and slap it in a system, you have to buy it in a $1000 laptop.


It's a pretty accurate cost estimate from those tracking Ax series costs. A chip AMD sells for $300 doesn't cost AMD $300 to make; since AMD is a public company, its gross margins tell us it costs roughly $150.


But you were comparing this theoretical production cost to the retail cost of other chips, that doesn't make sense.

Hardware companies don't charge cost + some reasonable markup, they charge whatever the market is willing to pay for it. I highly doubt Apple would release some hypothetical "threadripper killer" at a price any lower than they needed to to compete.


My point is that someone building a Threadripper PC has to pay $2,000 to $3,000 just for the CPU.

Let's assume that all the other parts, high speed RAM, SSD, power supplies, cooling, motherboard, etc. cost another $500, and that operations, assembly, marketing etc. average 20% and the manufacturer aims for a 5% margin. So they sell their low-end Threadripper PC for $3,300, and their high-end Threadripper for $4,700.

Then Apple comes along and makes its own high-core-count versions of the M1 (let's call them the M16 and M24) for $1,000 and $1,500, with similar performance to the Threadrippers. When Apple builds high-end Mac Pros out of them, they also spend $500 for other parts, giving total parts costs of $1,500 and $2,000; applying its typical ~50% margin, Apple sells them for $3,000 and $4,000, matching the performance of the Threadrippers from the lowest-margin manufacturers for 10-15% less.

And on top of that, the M16 and M24 use less power and generate quite a bit less heat than the Threadrippers. They don't need any fancy cooling systems. That can lead to smaller packaging and even more cost savings.

Obviously when Apple's 16- and 24-Firestorm-core Mx chips come out, newer Threadrippers will have moved to a smaller process as well, which will help increase their performance while reducing cost and heat. But there is little doubt that Apple's Mac Pros will be much more competitive price- and performance-wise at that time, and if they keep leveraging TSMC first, there is a significant risk that they actually take some of their laptop cost/performance advantage up to Threadripper levels.


How does that matter though? His machine isn't sitting on his lap or running on a battery. Electricity isn't that expensive.


It matters because it's not just adding electricity, it's adding heat. Pretty clearly Apple can add a whole bunch of Firestorm cores to future M1 derivatives and still fit in smaller power/heat budgets than current-generation Ryzens and Threadrippers operate in.


Maybe. I don't know enough about CPU design to argue with that. There are others elsewhere in this thread saying that a lot of the energy and heat goes on the communications systems and other resources you need more of when you ramp up the core count.

I suspect we'll never know. Apple have shown little interest in anything other than prosumer workloads for a long time. I doubt they'll be making an M1 server platform anytime soon.


I have to somewhat concede your last point. What percentage of AMD sales are Threadrippers in any year?

It's possible Apple just doesn't think the investment in a 30-core CPU is worthwhile and stops around 16 cores, which would still be more cores than any mainstream i7 or i9 ever shipped.


> And if apple ever brings out a M1 Workstation, i can probably buy 3 other Server/Workstations with that money.

Based on the MacBook Air/ Mac mini/ MacBook Pro pricing, I don't think this is the case.

Apple has delivered a significantly faster MacBook Air for the same price as the previous generation. Not only did they increase the CPU speed, they essentially doubled the performance of the SSD while maintaining their base price.

Apple dropped the price of the base mini by $100 while delivering a significantly faster Mac mini.

I suspect whatever Apple delivers for their top-end machines will be equal or lower in price than their current Intel machines.


I'm completely with you.

The hardest programs to run will not benefit here.

I don't need chrome to open up 0.1s faster.


Well, it's really good for a laptop, but praising it as the fastest and best CPU is just rubbish.


It is fair to lump Linux in there. Linux is supported on pretty much every largish Arm system other than Apple's, at least if you include Android Linux. No other Arm system comes close to matching the M1 in terms of performance.

I don't think that is an indictment of Linux. It is likely if the specs for the M1 were released, Linux could wring more performance out of the M1 than MacOS. At least in some workloads.


I think in the last year or so we hit an inflection point for ARM performance that makes it acceptable.

The Raspberry Pi 4 / Raspberry Pi 400 are halfway decent to use. The Lenovo Chromebook Duet is probably the first ARM-based Chromebook where I haven't felt constrained to keeping only three tabs open (and although it still struggles with some of the largest spreadsheets and dashboards at work, it is a delightful second-screen device).

But I think the M1 still has an edge because of the vertical integration between software and hardware -- if you previously wrote an app against Core Video or Metal, it should get hardware acceleration once you recompile it for the M1. Whereas if similar accelerators were exposed in Linux or Windows you'd probably have to write against a new custom API/library (such as nvenc).


I don't understand this _extremely_ common viewpoint: Intel has been putting out awesome fanless chips for Windows and ChromeOS for a couple of years now, and Tiger Lake scores ~1400 single-threaded / ~3700 multi-threaded on 4 cores in Geekbench. M1 is __awesome__, but it's roughly what you'd expect from a Tiger Lake die shrink down to the 5nm-class node the M1 is built on.


I am not sure the derogatory tone is fair. This is just how progress works. Not that long ago, Apple had to be saved by some pennies thrown its way by Microsoft. Not that long ago, Apple was moving from Motorola processors to Intel ones.

There is nothing wrong with someone falling behind; someone has to fall behind if we want progress. No reason to insult predecessors.

Planck and Dirac weren't making fun of Newton. Minitel was used in France as a simple precursor of the Internet years before the web existed. Was Minitel bad? Should we make fun of it?

Was Minitel standing in its underwear in front of the Internet? Did Dirac steal Newton's pants?

Intel made a lot of great stuff. Now Apple is making great stuff (or, to be precise, TSMC does). Next year someone else is going to make great stuff. Maybe Apple maybe not.

Intel and AMD chips are not that far behind, depending on which benchmark is used, even though they use older 14nm/10nm or 7nm processes.

I also really hate not being able to upgrade RAM: if I use a Mac and I need more RAM, I have to throw away my old Apple laptop and buy a new one. So I have to spend way more money, not to mention the impact on the environment.

So, today, I am still choosing an Intel/AMD computer; I can install whatever operating system I wish and upgrade my system to whatever configuration I dream about and can afford, now or in the future.


The point about being able to upgrade ram or upgrade to whatever you want doesn’t make sense.

A ram upgrade will never compensate for a less efficient memory architecture.

There is no CPU architecture other than Apple’s that will make your laptop faster because AMD and Intel are thermally limited, so there is nothing to upgrade to.

The only situation in which you are right, is if you are talking about a high end desktop tower.


>The point about being able to upgrade ram or upgrade to whatever you want doesn’t make sense.

If my program needs 64GB of RAM, your highly efficient 8GB "memory architecture" can do nothing about it... you have to know that even my GPU has 16GB on its own. But I have to say that 8GB is probably enough for your Chrome browser.


Are you using a desktop tower?


Yes tower form-factor.


> Not that long time ago Apple had to be saved by some pennies thrown by Microsoft.

Agree largely with the rest of your comment, but this particular event was more than 23 years ago. I'm sure a good number of readers of this thread are younger than that!


It's actually really impressive how well Intel desktop CPUs still do considering they're still made on a (continuously improved) five-year-old process.


I agree with you but I basically filtered that as "typical Mac fanboy noise" and try to search for insightful content which I think this piece has, in spite of its "I have a shrine dedicated to Steve Jobs in my living room" tone.


The whole narrative has changed over the last few weeks too.

"Fastest chip" uhh well no..

"Fast for this one thing", yeah it's 0.1s faster opening that program no one ever said was slow.

"It has good battery life"

So this is the new narrative? Are nerds constantly fighting back against Apple marketing and they are adapting?

Usually their marketing team just insists over and over. Maybe because this is actually measurable (unlike privacy and security) their legal team is holding the marketing team back.


Such a silly argument. An M1 Mac embarrasses no one but prior Macbooks, as it only runs OS X. There is no useful comparison to Windows or Linux. It's as asinine as comparing a mobile phone performance to a desktop PC.


How are 2 laptops running different operating systems not comparable? The similarities of a Windows laptop and a Macbook far outweigh the differences.


Because the software I use that works on Linux or Windows doesn't exist or cannot be run on OS X. CUDA doesn't even support OS X anymore. If I can't run my OS/software of choice, it doesn't matter how great M1 is.


Oh so because this laptop doesn't work for YOU, anyone who compares it to Windows or Linux is making a "silly argument". Gotcha.


I think my original response was a bit overly dramatic. But my point still remains. I don't think you can compare an OS X based laptop to anything else. The software available is far too different, and OS X is much more locked down (and getting even more locked down).


The discussion seems to have settled down to "M1 is great! But you'll soon be able to get a Zen 3 CPU that's just as good." Which may or may not be true, but:

- Apple has a material (and apparently perceptible) performance advantage against the majority of the laptop market - which will do wonders both for their and Arm's brand.

- The M1 has killed the idea (which I've heard a lot) that you can't get performance out of an Arm core.

- There is now a credible desktop platform on which to develop for Arm in the cloud.

I think the real impact of this will be in 2-3 years and it's not good for Intel.


> The discussion seems to have settled down to "M1 is great! But you'll soon be able to get a Zen 3 CPU that's just as good." Which may or may not be true

Even if it is, I doubt there will be a Zen 3 laptop out that has the screen, keyboard, trackpad, and case quality of the MBAir. Even if it were slower, literally no one is building machines of that physical quality.

Some are attempting to compete, like the Dell XPS, but they're still falling short. They're high quality laptops, yes, but people who think they're "just as good" need to take a better look at the current MBAir - and this has 0% to do with the chip. Nobody is making laptops this nice.

I'm actually thinking about picking up an Intel MBAir (the one that just got discontinued) as it's likely the single best Linux-able laptop (no longer) on the market, even if it needs to do an opaque, closed-source, online activation each time you wipe the disk (to reactivate the T2).


I bought a new 13-inch Intel MBP just before the announcement, as I needed an x86 machine for development and was nervous it would be discontinued!


> The discussion seems to have settled down to "M1 is great! But you'll soon be able to get a Zen 3 CPU that's just as good."

It feels worse than that to me. Those people are actually saying "But Zen 4 in 2 years will be just as good!" because their claims depend on combining IPC improvements from Zen 3 _and_ a 5nm process that AMD has no plans to switch to for Zen 3. It's a weird bit of gymnastics in order to reach emotional parity with the M1 widely available right now today.


Without commenting on the gymnastics (I'm not qualified), the big issue for me is about actually having the machines in the market. Some of the Ryzen laptops seem to be impossible to buy, and who knows how much TSMC capacity AMD has (after the Xbox and PS cores), plus how much, and for how long, the PC laptop manufacturers have already committed to Intel.


> which will do wonders both for their and Arm's brand.

Apple’s brand, yes, definitely. Arm’s brand? Not so much.

Apple have made a very concerted effort to ensure that when the public thinks of the M1, that it is an Apple chip, and not an Arm chip.

I imagine that Arm isn’t too thrilled with that.


Arm doesn't care. Their customers aren't end consumers, their customers are large integrators like Apple.


It's done Arm's brand a power of good with the sort of people who read HN and that's probably the audience that they care about most.


Well, that big impact will only materialize if Apple can convince developers to make apps for this platform, something they have not been doing well as it is, even with their Intel offerings.

They need to stop ceding the desktop gaming market to Windows/AMD/Intel. When Catalina dropped, it broke a lot of software; there were many examples of productivity apps, utilities, and games that no longer ran. Worse, some developers just wrote off the platform.

Even Apple's M1 debut could not line up more than two or three well-known developers. Most of the demos were from developers people had to resort to Google for, and even then some didn't show up in search results.

No, Apple's problem versus the Windows platform is, and has always been, software. Apple never seemed keen on pushing the Mac as a platform, with the iPhone and iPad being the focus on so many levels. Now that they have a truly impressive offering, we have to hope they go recruiting developers to the platform, because Rosetta won't be here forever and it doesn't solve the availability issue.


Adding up to 1.8m (!!!) iOS and iPad apps (including games) to the platform was probably one of the biggest changes as a result of the M1.


I wish I was a more knowledgeable developer because I see an enormous opportunity to fill those gaps.


If 32-bit devs wrote off the platform, good riddance to them.


I'm not sure I see most of the disagreement here. As Gruber says, Moorhead is mostly complaining about software incompatibilities. But that's part of the total package--especially with Apple which is about selling the total experience. Personally I'll probably switch sooner rather than later especially given that I don't run a lot of native apps on my laptop. But it seems pretty reasonable to critique a new architecture laptop for (unsurprisingly) having some growing pains related to, especially, third-party software. Because people use a laptop to run software, not in isolation.


I had a suspicion upon reading Moorhead's review that he was trying to use familiar enterprise Windows software on the M1 instead of trying to use native apps:

> So far, I have experienced application crashes in Microsoft Edge, Outlook, WinZip and Logitech Camera Control. I got installation errors with Adobe Reader XI, Adobe Acrobat Reader DC, a Samsung SSD backup application, and Xbox 360 Controller for Mac.

What Mac user tries to use Edge and WinZip? A Samsung backup application? Xbox 360? Such a bizarre collection of software that snubs macOS's native browser.


Not to judge a book by its cover, but mind-bogglingly his "preferred browser" for OSX is Edge. https://twitter.com/PatrickMoorhead/status/13296105698261319...


I guess my question then is whether he uses macOS (note: it's OS 11 now!) as a daily driver or if he's just transplanting familiar software from his Windows habits in order to test drive the M1?


It's still apparently macOS. But, yes, they did bump the major version number.


The "OS X" branding was dropped in 2016. The current update was from "macOS 10.15" to "macOS 11".


Edge on macOS is a very good browser. Unlike Chrome, it has real privacy features for blocking tracking scripts right out of the box.

The Collections feature is also well done.


So like Brave but not as good?


> Edge on macOS is a very good browser.

.. if you want your manager to spy on you.


Chrome has an enterprise managed mode just the same, with even more capabilities (i.e., "spying features," to keep your wording) than the Microsoft ones.

Microsoft's enterprise mode can only control settings and extensions; Google goes several steps further.


Edge had an ARM-compatible version before Chrome did, which may explain it. The Chrome build for Apple Silicon just came out the other day.


This guy is a tech reporter? But he doesn't know how to take a screenshot.

Says a lot.


To be fair, the shortcut on Windows is a lot easier to remember.


Spotlight search "screenshot" is how I "remember". You get a little widget that offers all the options.


Edge has the Collections feature, which I cannot live without. Great tool for research.


The Chromium-based Edge is pretty decent... I'm still mostly using Chrome out of habit, but there are a few things to like about the Chromium-based Edge over Chrome.


To be fair, I'm getting absolutely enormous memory leaks in stock Safari on my new M1 MBP after a few hours of use. It doesn't seem to be reclaiming memory when tabs are closed.

https://imgur.com/a/HqxAspK

There are growing pains, and they're not all in third-party apps. (It is hilarious to me, though, that WinZip a) exists for Mac and b) is used by anyone on a Mac.)
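For anyone wanting to quantify this sort of leak rather than eyeball Activity Monitor, here is a small sketch (my own, not from this thread) that sums the resident memory of all processes whose name matches a string, using standard `ps` output; sampling it periodically while opening and closing tabs shows whether memory is actually being reclaimed:

```python
import subprocess

def app_rss_mb(name: str) -> float:
    """Total resident set size (MB) of processes whose command name contains `name`."""
    out = subprocess.run(["ps", "-axo", "rss=,comm="],
                         capture_output=True, text=True, check=True).stdout
    total_kb = 0
    for line in out.splitlines():
        parts = line.strip().split(None, 1)  # "<rss> <command name>"
        if len(parts) == 2 and name in parts[1]:
            total_kb += int(parts[0])
    return total_kb / 1024  # ps reports RSS in kilobytes

# Sample once; run in a loop (e.g. every 60s) to watch for unbounded growth
# after closing tabs. "Safari" matches the app plus its WebContent helpers.
print(f"Safari-related RSS: {app_rss_mb('Safari'):.0f} MB")
```

If the number keeps climbing after all tabs are closed (rather than plateauing, as with normal tab-memory reuse), that is evidence of a genuine leak rather than caching.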


I might be mistaken but, IIRC many browsers will not free memory from closed tabs. They'll keep it and reuse it for new tabs instead of deallocating and reallocating too often as you play with tabs.


Yes, but it's not doing that.

Every new tab incurs more RAM usage; it's not reusing. Usage keeps growing and growing until the system gets sluggish and it prompts me to kill Safari.


If you're using 1Blocker, you might want to turn off the (memory-leaking) 1Blocker Button until an update shows up...

Worked like a charm at least to me :-)

https://twitter.com/kb091412/status/1330470681813934086


If you are using an ad blocker or other extensions, that is likely the cause. I can't find the specific reference, but try turning those off.

edit: https://twitter.com/hondanhon/status/1329132372034347009


A browser isn't viable to me without ad blocking and 1Password.

Firefox has issues currently on M1, as well - uBlock Origin won't load, for example. Thankfully, that particular issue is fixable by disabling javascript.options.wasm and javascript.options.wasm_trustedprincipals in about:config as a temporary solution.

I fully expect these to be fixed, but it's clear it's gonna be a few weeks/months before everything runs right. If I were a non-technical user, I'd be returning it for an Intel Macbook.


Of course, maybe you are experiencing a different issue that also leaks memory, but Dan confirmed 1Password was not causing his problem, just 1Blocker. https://twitter.com/hondanhon/status/1329471604649205760?s=2...

The 1Blocker extension can be disabled as a temporary fix, and crucially, the ad blocking functionality does _not_ have to be disabled.

https://twitter.com/1blockerapp/status/1329744152150626304?s...

In the meantime, disabling the 1Blocker Button extension in Safari > Preferences > Extensions and restarting Safari fixes it. It won't affect how our app blocks intrusive content in Safari since 1Blocker Button only provides quick actions to the main app.


I'm not using 1Blocker, but perhaps AdGuard has the same issue for similar reasons.


He’s not saying permanently disable Adblock, but rather disable it to see if it’s responsible for the memory leak in question.


As far as I can tell, the issue is present without any extensions. I'm not willing to browse unprotected for several hours to 100% confirm, but it still grows with every tab.


Yes, it grows, but slightly. Ever since 1Blocker updated their app to use the older logo rather than the new monochrome logo, I am not seeing the disastrous memory growth I was seeing before. After ~5 days of use without restarting Safari, it is still hovering at around 800 MB used, instead of the 10 GB+ it was using before.


As others have said, I was simply offering a suggestion to help debug, not passing judgement on adblockers.

I have had Safari open for 23 hours on my M1 MBP, and it is using ~800 MB of memory right now. Extensions are the most likely culprit.


Ultimately, my point is not "I need help fixing this" - I've gotten Firefox working fine with a few about:config tweaks. My point is "non-technical users are going to run into weird issues they can't necessarily fix right now".


HN seems to really dislike Brave, but it is hands down the best browser for people that care about ad-blocking.


It’s strange, I haven’t experienced the memory leaks but now you’ve got me curious.


Not running a Mac currently, but one of the things I hated was the UX for things like Keka, etc. vs. the 7-Zip/WinZip interfaces; they just didn't match how I was used to doing things. Also, having to extract an entire archive to pluck out the one or two files I was looking for is often a pain.


I don't disagree re: the GUIs for Mac compression apps.

Part of the reason for this IMHO is because your typical developer on a Mac has always used the command line tools for that sort of thing. So, Mac developers were never really motivated to build great GUI tools for certain things.

Compared to Windows, where the built-in command prompt was lacking for so many years, and as a consequence 7-Zip/WinZip/WinRAR gained really robust GUIs.


It's ironic that you mention that, as I often used to just defer to the command line because the UX for the GUI apps was so bad. I've always been centered around client/server or web app development or the backend, and it was never painful enough to make something for myself (scratch the itch).


Wow, that’s a lot of memory use. Would you be willing to file a Feedback Assistant report with a sysdiagnose? Would like to see what’s up here. How many tabs is this?


This is after a couple hours of use. It stays high until quitting - closing all tabs doesn’t budge it.

Happy to file a report. Any special instructions to make sure it gets to you? edit: Filed one; FB8927230.


To be a "Mac user" you have to give up the software you prefer, and use the software Mac prefers you to use. And you don't get to be unhappy if the software you prefer doesn't run OK? I am so happy to not be a Mac user, even if I'm sad I won't be trying out the great M1 CPU any time soon.


How is this different than any other OS? My preferred software is Mac or *nix based, so by your logic:

To be a "Windows user" you have to give up the software you prefer, and use the software Windows prefers you to use. And you don't get to be unhappy if the software you prefer doesn't run OK?


I can't tell but I think you're making my point. As a Windows user, I can absolutely be unhappy if software I prefer does not run. If someone comes around and says "but that software was originally for Mac... you aren't allowed to expect it to run well on Windows! A real Windows user wouldn't complain about that software."

And if the software I prefer runs better on macOS, I have a good reason to switch away from Windows. Just as if software you prefer runs better on Windows, you can complain about macOS (or in this case, incompatibility due to the ISA switch.)

My issue was with the comment using "mac user" as an excuse for software that isn't what the "true" mac user considers essential not working well.


Oh, I think I completely misunderstood your original comment.


MacOS is extremely good at supporting software originally developed on other OSes. After all, you can get all the flagship MS Office apps on the Mac in very good iterations, plenty of Windows-first vendors support the Mac with versions of their software, there are virtualisation and emulation solutions, and a lot of Linux software compiles and runs fine on the Mac as it's also a flavour of Unix.

In contrast, there is no way you are ever running the vast majority of MacOS-first apps on other platforms. A wide variety of games even run on MacOS, either natively or through compatibility layers like Porting Kit. In this respect, out of Windows, Linux/Unix, and MacOS, I think it's clear that MacOS wins hands down. It's not even close.

So given that this is pretty obvious, and I have a hard time believing you aren't perfectly well aware of it, why on earth are you trying to make this argument? Is running other platforms' software really a primary criterion for you? In that case, the Mac should be right up your street.


But that goes equally across platforms. When I'm on a windows machine, I miss the hell out of Keynote and Bear, for example.


Exactly. So you either go back to using macOS, or you complain that Windows doesn't run the software you like. It's OK to do that for any OS you run. It's a valid reason to not like an OS, or in this case, an OS that is going through a transition to a new ISA. Until it runs the software you prefer, it isn't ideal.


Sure, but it doesn't necessarily make it an objectively poorer product just because it's hard to swim against the current on that platform. Every platform necessarily has some expectation of doing things a certain way, and when things are done that way the experience will be better/easier.

You seemed to be implying that this was some unique problem with OSX and that's what is holding you back from trying M1. But really, it sounds like what is holding you back is just the normal barriers of changing platforms.

Personally as a Windows user I am hoping that Windows for ARM gets supported on M1 soon, but I don't think it's so unreasonable we're not there yet, and I am not going to discount the whole platform just because of it.


I did not mean to imply that this is a unique problem on OSX. I just disagree a bit that you have to conform to a standardized way to use a platform. Perhaps you do with OSX, and it is a unique problem there? Obviously, due to market structure, Microsoft gears Windows to "everyone" without being the "best" for many people, although it tends to be the best for PC gaming.

What is holding me back from trying the M1, personally, is cost. I just paid $950 for a Ryzen 7 4800H with 16GB and GTX 1660 Ti. It can't hold a candle to an M1 Macbook in battery life, but I know I can run my programs and play my games and not think twice about it. My failsafe is an electrical outlet. I'm hoping I can end up trying an M1 (or successor) through an employer though! I spent the past year using a ~2014 Macbook Pro through work, and for work things, it was better than my Windows machines for working on projects where everyone else used bash on a Mac. As WSL improved, I was able to be equally productive on my Windows machines and tend to use them and not bother with the Macbook Pro.

That's all a lot of extra anecdote you didn't need, but to be sure, I'm not trying to single out OSX - the OP singled it out by saying, essentially, "you're using it wrong!"


FWIW my main point was that Moorhead is likely not indicative of the average Mac user.


>WinZip

https://www.winzip.com/mac/en/

>Xbox 360 Controller for Mac

What's sketchy about that?

Apple is pushing the "play any iOS game on the new Macs" angle hard. And a lot of those games support controllers natively on iPad and iPhone, so obviously someone wants to try that out too.


I'm confused. Is that author running some "Xbox 360 Controller for Mac" application?

I remember there used to be some special apps back in the day, to let you use a 360 controller on MacOS.

I wonder if the author knows that you don't need those apps any more?


Yep, that's exactly what he's doing, and it's not compatible with Big Sur because it's a kext - nothing to do with the M1.


What Mac user tries to use Edge...?

This Mac user, having just discovered that there's a macOS version of Edge. TIL...

EDIT: well, you're off to a good start, Microsoft:

https://imgur.com/a/IWZMlMl

Instead of a browser, I guess I installed demons. I wish they'd get this shit right. MS Teams always has this notification window hiding in the background so that whenever one alt-tabs to Teams the window doesn't actually have focus and your typing goes to /dev/null. And now their browser does this "fit-and-finish" crap that few muggles will parse.


Gah, the exact same thing started happening for me with teams this week and it’s been driving me crazy.

I guess I’ll have to rm it, all its junk in ~/Library and see if it gtfo’s and starts behaving again..


I have experienced many crashes of these on Wintel. What a surprise that they crash on M1 Macs!


I just installed WinZip - runs smoothly on my M1.

Ditto Microsoft Edge: installed it, visited a few websites - no issues, aside from it being a bit slow (although it may just be a crappy browser; I never tried it on an Intel Mac).

Perhaps with some heavier usage there are issues, but so far so good.


Wait, WinZip is available on a Mac?

I know it's not the point, but... what's the point of using WinZip on a Mac when the OS has built-in unzipping?


More granular control, more compression options.


Windows has built in unzipping too, but I still find WinZip really useful (for zipping, and fast unzipping the way I want).


That does sound a little sketchy... though I can see wanting to use the 360 controller; I would like to see a review of how well most of the Steam-supported games run, for example. I'd been considering doing an ITX build for the living room, mostly for emulation/games... the new Mac mini is definitely in the consideration, and seems to sit right between a lot of the NUC/SFF options and mini-ITX options, punching above its weight against the latter.


I think no additional software is needed to use the 360 controller on an M1 Mac.


Enterprise users often don't have a choice. Microsoft is pushing very hard to get Edge set as the mandated browser at enterprises everywhere. At work we're also moving away from Chrome for this reason.


Macbook Pro user since 2007.

I use Chromium Edge. Runs great on my Intel 16" MBP.


I read Gruber frequently even though he annoys me (he's sort of like John Dvorak that way), and this piece is kind of emblematic of why he does. To be clear, the ARM Macs are way more exciting than I expected, and I'll probably buy one at some point. I also do not think Moorhead's Fortune article was perfect, but Gruber's righteous indignation is grating.

These statements are pitted against each other, even though they are both very much true:

> emulating or translating apps ... is necessarily going to be ... somewhat incompatible at best ... But Apple’s Rosetta 2 ... is a technical marvel.

He's upset about this quote from Moorhead's review:

> there were some very positive things about the new laptop. The new M1 processor is impressive, but far from perfect — it has many warts

Moorhead perhaps punctuated this poorly, but I think it's reasonable to assume that the warts refer to "the new laptop" in general, not just "the new M1 processor" in particular. Yes, his problems are generally software related (although he cited fan noise as a complaint), but that's a huge part of the product. Lots of people would love to use an M1 Mac without macOS, including one particular Finn.

Gruber can say, "There is no 'Well, here’s the downside' with regard to the state of Apple Silicon," but that remains to be seen. How much of the M1's advantage is due to a trillion dollar company buying up every last bit of TSMC's 5nm production capacity? Can they scale the unified memory architecture much beyond 16 GB? If they can, great; they've truly lapped the field.

> There is no balance, if by balance, you’re looking for a story that says any PC hardware, ARM or x86, is competitive in any way with the M1 Macs for low-energy computing.

This review wasn't about "low-energy computing." It was about using a new Mac as a replacement for an old Mac. This is a real, shipping product, not a beta. You can snicker at Moorhead's Microsoft-centric workflow and choice of apps, but that's his perspective.


>How much of the M1's advantage is due to a trillion dollar company buying up every last bit of TSMC's 5nm production capacity?

Yes, it's very hard to separate process from architectural (and implementation) design. And, as an industry, we're probably sticking our fingers in our ears more than we should about what happens when process shrinks really truly stop happening. And they will. There are still a variety of options but there's very little evidence they have anything like the legs of CMOS process scaling even if we do dump a big part of the problem on developers who don't get to develop for a largely unified architecture any longer or lean as heavily on high-level abstractions.


He may have wanted to say “the hardware is great, but not all software is ready for it, and it is consumer hardware, so it may not have enough ports for you”, but if so, it’s badly written.

Certainly, that second paragraph stating the M1 has many warts and is only fine for users who have a lot of money, to me, conveys a different message.


The disagreement is because most M1 reviews/comments were a bit of fan fiction - ranging from custom instructions for JavaScript, to "order of magnitude" perf per watt gains. Just as it doesn't make sense comparing the performance of the fanless M1 with a desktop CPU, it doesn't make sense comparing the perf-per-watt either; they are just optimized for different parameters. From what I've seen with the more objective benchmarks (not just geekbench), the Ryzens don't seem to be doing worse if you account for 7nm vs 5nm.


Complaining about incompatibilities in software used by less than 5% of Mac users and ignoring the strong compatibility of software used by 95% of Mac users seems misleading.


Equally, talking only about the stuff that works great and completely ignoring the incompatibilities that do exist is also misleading.

Every other review out there covered the strong compatibility and how amazing the M1 is. I for one am glad that there exist a least a couple of reviews that focus on everything that doesn't work. Let's be honest, no one these days makes a purchasing decision based off of just reading the first review they find.


I agree, but these reviewers should weigh and categorize the problems they find. For example, reviews have been pretty consistent that

1) Graphic Design software: Works well out of box. Key Adobe apps not yet native and have some minor issues under Rosetta, but still work, are still fast and should be much faster when native.

2) Video Production: Unbelievably fast, on par with high end iMacs and Mac Pros and should get faster as all tools go native.

3) Software Developers: Super fast builds and tools. Homebrew not native yet but can still use with Rosetta version of terminal. If you need Docker it’s months away.

4) Niche users of esoteric apps ported from Windows: this is the group Patrick reported in. Yeah, some of these apps are months away from being usable, but all have perfectly cromulent native replacements.


no docker - no buy.


Docker not working sure was a deal breaker for me. That's the sort of thing I would expect to work out of the box. A huge number of developers use that daily and they seem like a pretty big chunk of Apple's target market for macbook pros.


No. Developers are a tiny, tiny fraction of the market for MacBook Pros.


I don't care about any Apple products, but I really do hope that the M1 will force all other manufacturers to ship good ARM laptops. Or force Intel and AMD to pull a rabbit out of their hat to match the performance/watt of ARM chips.

I want long battery life, no fans and a lot of memory. Running Linux. If M1 is the catalyst to make it happen, then so be it.


Don't get your hopes up. My prediction is that, yes, there will be a slew of new ARM laptops. And even though they technically run ARM chips, it will only be a marketing selling point for now. They might even cram Snapdragons or other mobile-device CPUs in there, just to be able to put an ARM sticker on a plastic frame with a folding LCD panel and keyboard. They will sell it as better than Intel, because that is what people who can afford an Apple will tell them is so great about their new laptop. In the long run, because of this, the Windows ecosystem might adapt to embrace ARM as a whole, and then we will start to see competitive ARM PCs. But it will take a while.


Ultimately, it's all about Windows. I know it's hard for many of us who basically hardly see anything other than Mac or Linux laptops to really grok, but something like 90% of the laptops out there run Windows (if you exclude Chromebooks which AFAIK are mostly used in education).


Indeed, this is why Apple is able to take this leap and the PC platform hasn't so far. They don't have to wait for as many partners to get along for the ride.


I think more accurately: Apple won't wait for partners to get along for the ride.

Backwards compatibility has long been a Microsoft obsession (out of necessity for the enterprise market) and an Apple allergy.


I'm not sure there's another company of Apple's size and influence which has consistently been so ready to dump legacy at least a beat before its customers were prepared for it. Headphone jack, 5 1/4" floppy, I'm sure a bunch of other things I'm forgetting.

"I'm going to dump Apple over this!"

"See ya. Don't let the door hit you on the way out!."


Pretty much every I/O port Apple has ever used they dropped before the rest of the industry. PS/2, parallel, serial, USB-A...

Of course optical drives; that was a big step.

Apple is ruthless at pruning old tech.


Does anyone miss floppies? CDs? What about SCSI or ADB? I can't think of one thing Apple dropped that caused such an uproar at the time that people regret now. Sure, I bitch about my laptop not having an SD reader, but Apple hasn't shown me what's next for its replacement other than thinner (meh). SCSI->USB/Firewire->USB2->USB3->USB-C this all made sense.


When did Apple use PS/2 ports? It went D-sub -> ADB -> USB, as far as i remember.


You're right; I was accustomed over the years to point to PS/2 as an example of the PC world being slow to give up on an outdated port, and carelessly conflated that with Apple using it.


It's also because Microsoft half-assed their Windows-on-ARM attempt. They made something akin to Rosetta 2 (x86-to-ARM translation) but only supported x86-32, not amd64. Apparently they are working on 64-bit support, but releasing something half done doesn't inspire confidence. Even the 32-bit support was reportedly not very good/fast compared to Rosetta 2, anyway. If Microsoft steps up their game then I can see Windows on ARM starting to take off, even if Qualcomm's chips trail Apple's M1 in performance.


MS x86 emulation will likely be far slower if they don't have access to the memory model trick.


And partners also means customers. A ton of those Windows systems (and I should have said systems--rather than laptops--as I don't know how things split out on laptops specifically) are going to non-technical users in big companies who absolutely do not want any changes that alter a menu or move an icon or introduce some incompatibility they have to deal with. They absolutely do not care whether there's some significant price/performance/watt boost if it otherwise introduces even mildly disruptive change.


Frankly, I think that the willingness on the part of some, like Microsoft, to cater to that degree of hidebound thinking is actively harmful to us as a society, especially in terms of technology.

The more we enable people who are unwilling to even try to learn that this particular option has moved from the Edit menu to the Format menu, or has removed the default toolbar icon (but you can add one), or has simply moved an inch to the left, the more we send the message that it's normal and expected that people shouldn't have to ever adapt and change and learn new things.


On the other hand, we have a lot software in the industry that changes regularly without any perceivable improvement. Google is known for relentlessly pruning whole products that people were happily using but they're also pretty aggressive about removing features from existing products.

One big example of that is the removal of community captioning for YouTube videos. Call me hidebound but I am not happy at all with this, even though I don't use it very much. Captions are critical for making videos accessible to the deaf and hard of hearing communities. The removal of this feature is an ugly snub to a vulnerable group. Now they will be wholly reliant on the video creators to do the captioning themselves or otherwise be stuck with the lousy automatic captioning.


Sure; that's also a problem.

There is a balance to be struck, and between us, I think we've more or less bracketed it.


Apple took this leap because their CPUs are sufficiently fast. No one else has managed that.


Nitpick: The M1 is also essentially a mobile chip, very close to the iPad version.


What makes it mobile, the device it's in? If I have one in my desktop or Mac Mini, it's no longer mobile. The power draw? Everyone benefits from it using less power regardless of use. Form factor? Why does physical size dictate small == mobile?


I was responding to a post that talked about putting Snapdragon or similar ARM chips in laptops in a deprecating tone, and was making this point exactly.


I agree. Competition is awesome for consumers. I don't care much about macs either (although I'm really interested about Mac Pro Mini rumours), but I know that Intel and AMD will invest lots of money into research right now as not to become obsolete.


> Or force Intel and AMD to pull a rabbit out of their hat to match the performance/watt of ARM chips.

AMD has been great (and improving) on performance/watt. It appears that the M1 has the upper hand, but then again Apple bought all the 5nm capacity for the year. I'm really curious how well 5nm Zen chips would compete on performance/watt - I suspect Zen 4 (5nm) will compete very favorably.


> Or force Intel and AMD to pull a rabbit out of their hat to match the performance/watt of ARM chips

I believe I read an article the other day explaining why the M1 is so fast and efficient; basically, the design it uses is impossible to do on Intel, for many reasons including the fact that Intel instructions are variable-length (so each one must be fully parsed just to find where the next instruction begins) while M1 instructions are fixed-length (so no parsing is needed to locate boundaries), thanks to the RISC design (finally a winner).
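The boundary problem can be sketched with a toy model (the one-byte length encoding below is invented for illustration and is not real x86):

```python
# Toy model of the decode bottleneck described above. With fixed-width
# instructions (AArch64 uses 4 bytes), the decoder knows instruction N
# starts at byte 4*N, so many instructions can be decoded in parallel.
# With variable-width instructions (x86 uses 1-15 bytes), finding where
# instruction N starts requires decoding instructions 0..N-1 first.

def fixed_boundaries(code: bytes, width: int = 4) -> list[int]:
    # Start offsets fall out of arithmetic alone -- no need to read the bytes.
    return list(range(0, len(code), width))

def variable_boundaries(code: bytes) -> list[int]:
    # Hypothetical encoding: the low two bits of the first byte give a
    # length of 1-4 bytes. Each start depends on decoding everything before it.
    offsets, pos = [], 0
    while pos < len(code):
        offsets.append(pos)
        pos += (code[pos] & 0b11) + 1
    return offsets
```

In the fixed-width case a wide decoder can speculatively start on offsets 0, 4, 8, ... simultaneously; in the variable-width case the scan is inherently sequential, which is one reason very wide x86 decoders are expensive.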

Methinks we are witnessing the death of the traditional Intel chip API as the bad design decisions made along the way come to a head.

Should make things interesting as now both Apple and Linux have a head start on ARM support.


Funny thing, you could make a constant-length superset of x86-64 with backwards compatibility, and it would even run fast...

(The instruction decoder would become more of a translator.)

The bigger problem with x86 is the implicit hard synchronization barrier after most writes. This cannot be easily fixed without introducing new instructions altogether.
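A deliberately simplified model of that ordering cost (it ignores dependencies and the full ARM memory model; it only illustrates why plain x86 stores are cheap to emulate on TSO-capable hardware like the M1's Rosetta mode):

```python
# Simplified model of the point above: x86's TSO model makes a core's
# stores visible to other cores in program order, while a weakly ordered
# core may let independent stores become visible in any order unless the
# emulator inserts a barrier after each one. Real ARM ordering is more
# nuanced; this is only an illustration.
from itertools import permutations

def visible_store_orders(stores: list[str], tso: bool) -> set:
    """Orders in which another core might observe this core's stores."""
    if tso:
        return {tuple(stores)}        # program order comes for free
    return set(permutations(stores))  # reordering possible without barriers

# Translated x86 code relies on 'ready' never becoming visible before 'data':
program = ["data = 42", "ready = 1"]
```

Without a hardware TSO mode, the translator must conservatively emit a barrier after nearly every store to rule out the extra orderings, which is where much of the emulation slowdown comes from.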


Microsoft has a version of Windows running on ARM in the Surface Pro X. Wouldn't that give them a head start on Apple?


Apple has had iOS running on ARM phones and tablets for how many years now? They already used that head start to launch themselves into their ARM computers.


And it's not particularly relevant, but the Newton was an ARM device, so Apple's head start is even longer than that.


ARM was a joint project between Apple, Acorn and VLSI, so you could say that they had a head-start on pretty much everyone except Acorn, who invented it in the first place.


What’s the Windows app compatibility look like on those, though?


Most UWP apps should be easy to compile for ARM. Emulating x86-32 apps on ARM Windows has been a thing for many years and the further expanded x86-64 support is in Insider builds.

Really the bigger issue is that the ARM chips Windows has thus far run on don't have the similar TSO opts the M1 came with.


More ARM SoCs means more machines that will never see Linux or *BSD support. A lot more work[1] needs to go into supporting each individual SoCs than x86 machines require.

[1] https://elinux.org/images/a/ad/Arm-soc-checklist.pdf


Linus said no to Linux on M1.


The parent comment doesn't say it has to be M1, just that M1 could be the push to make it happen elsewhere.

(also, even if it were about on M1, Linus said he thinks its unlikely, other qualified people disagree)


> I actually discovered that I’ve had an instinct of measuring my MacBook’s CPU usage by feeling the heat on the strip of aluminum right above the Touch Bar, and I can’t even do that anymore now.

Market opportunity here for a peripheral that's just an adhesive strip you put above your touch bar, and it outputs heat corresponding to CPU usage ;)


Good lord, don't touch there! You're risking burning your fingerprints off, then you won't be able to easily unlock your computer and phone!



lol didn't even have to click the link to know the joke


My new M1 Macbook Air arrived today. The excitement was quickly replaced by sober frustration.

Yes apple did a remarkable job with the M1, no they've still lost their mojo when it comes to high quality software.

I've spent the last 3 hours trying to wipe and reinstall the thing, because their non-virgin-OS migration assistant is simply unreliable.

So far, I've had issues where the fused Data/OS volume would just split apart, with no clear way to properly format the drive back to an installable state. Disk Utility gives no indication whatsoever of which level of the nested APFS hierarchy a wipe should even occur at. And there are absurd error messages during install, like "can't install macOS, no admin user found" - 'yeah, no shit'. I really really really wanna like this machine, but macOS is stuck in this limbo of not wanting to give you any control, yet not taking over those functions for you, so you constantly have to guess what's doable and expected.

Update: Right now I'm trying to reinstall macOS via a bootable USB drive, which is actually what's recommended by Apple support. (hello mid 2000s)

This is basically my last option, before having to bring it into an apple store.

The previous method Apple recommended involved downloading Big Sur via the command line, which didn't work for me, because curl failed to write to disk.

I mean, how do you even manage to break curl...

Feels just as much work as setting up a linux laptop...

Final update: Yep, it's dead, Jim. Apple support suspects a hardware failure. Back to the store you go.

I'm gonna spend that money on 4 DevTerm and a box full of beer.

cheers


> I've spent the last 3 hours trying to wipe and reinstall the thing, because their non-virgin-OS migration assistant is simply unreliable

Did you try the migration assistant before nuking the machine? Maybe it didn't work for you in the past, is this still the case?

I mean, you have taken a perfectly usable system and decided to wipe and reinstall from scratch. That's not a normal user experience.

Most users are NOT going to wipe a brand new machine. Should anything break, they will reinstall from a Time Machine backup.

Or, worst case, they will Command + R on boot and reinstall from remote recovery. Why isn't that an option?

This can hardly be blamed on Apple. Before messing with the system, one should be prepared to spend time bringing it back up, including any mistakes made along the way. Last time I wiped a Windows 10 machine I had even more problems than that - made worse because Windows recovery is borderline useless (and can't install the OS from the internet). But hey, I knew what I was signing up for.


You have demonstrated exactly why I don't care how fast the M1 runs, I'm not buying Apple hardware. Not until I start hearing that they have done something about their long quality slide.

See https://news.ycombinator.com/item?id=25238688 for the last time that I expressed this opinion on HN.


Comments like these always make me wonder if I'm the only person in the world still delighted by Apple products. I mean, I alone cannot be responsible for their sales volume.

And then I start to wonder if I should be more critical of Apple, so I fire up my Windows machine and think silently "No, this is still a worse experience". So I fire up my Ubuntu and have the same reaction. I just don't understand the vitriol some people have for Apple, to me it is hands down always the better platform.


At least for me the reaction is largely emotional. I like using my linux desktop because I can set it up exactly how I want, and macOS just doesn't roll that way. Now, I'm not really the target market, so whatever, but there are a few things that upset me about it:

1. I could really, really love macOS if they had a "don't treat me like a child" checkbox I could hit somewhere that would turn off some of the more obnoxious default behaviors. It took me a solid 5 minutes to figure out how to navigate to the root of the filesystem on a new MBP I recently was given at a new job. The most intuitive thing I could think to do was double click the name of the folder I was in ("Recents", by default when you first open it), which seems to do nothing except make a little icon appear and grow/shrink the window in Big Sur. Neat. The left pane, by default, literally has no means of navigating to the filesystem root. OSX has seemingly always wanted to pretend the filesystem doesn't exist. I like the filesystem. Stop trying to hide things from me.

2. This point is actually more a credit to apple than anything - the rendering, touch input, display management, etc on macOS is stellar, and it makes me angry I can't have the same level of polish on my linux install. I really, really want something that is as smooth as the macOS experience but with the customize-ability of linux. Apple pisses me off because they are incredibly close to that goal but they find it philosophically objectionable (for business reasons that I understand, but it still pisses me off)


> I like using my linux desktop because I can set it up exactly how I want, and macOS just doesn't roll that way.

I'm not trying to have an argument here, but I've never really understood what is the "exactly how you want" that Mac OS doesn't provide? Would you mind sharing some details? Is it just a matter of not being familiar with the places to change the defaults to something more to your liking, or is it actually missing the ability to do something you need?

I've been a Mac user since the 80s, I write software for a living, and I understand that among tech folk there is a continental divide between convention and configuration, and I think that might translate to choice in OS, too. Seems to me that most Mac users are on the convention over configuration side, and most people who are anti-MacOS are the opposite; they want to configure every bit. Just wondering if that's the case here, too, or if there's more to it specifically.


To some extent you're totally right. I can elaborate.

Some of it are technical choices Apple has made that make things hard. I like tiling window managers. The best (only?) one for OSX now requires you to go disable SIP to use it: https://github.com/koekeishiya/yabai/wiki/Disabling-System-I... -- this wouldn't be the case if Apple cared at all to support these sorts of use-cases, but they don't, so it suffers collateral damage from their attempts to improve security. (Again, totally reasonable business choice. Not a ton of people in this boat, but turns me off. Why can't I do this by clicking something in system prefs like when I open a downloaded app?)

Another one is, as a developer who once worked at a shop that shipped three-os software, their policies relating to macOS licensing are comical. It's almost like they don't want you to develop for Mac at all unless you're going to exclusively develop for Mac. We had moved everything to the cloud, but we still had a room full of stupid mac minis just to run our build farm. Ridiculous. The new news about mac minis in AWS is an ever-so-slight improvement.

Some of it is definitely the familiarity you describe. I don't want to have to get used to all the weird BSD-ish-but-not-really versions of the common CLI tools. I have no interest in learning AppleScript, etc. I'm being fairly obviously hypocritical because I was willing to learn a bunch about obscure config files on Linux. But my experience has often been that things I would be willing to invest in customizing on macOS are simply not customizable.

And, hey, perhaps I would've been willing to go through all that were it (reasonably) possible to run OSX on commodity hardware, but as a kid learning programming in the 90's macs were damn expensive compared to windows alternatives. So, I grew up on windows. It's still true. Today I could afford a Mac Pro, but it's a laughable value proposition (easily 3x-5x the cost for equivalent power) compared to building my own linux box (as to the 'your time is money!' counterargument people sometimes make here - it takes me about 2 hours to build a PC and my linux box hasn't panicked or booted improperly once). This used to make sense to me when the OS/hardware was more tightly integrated, but I don't really see how a macPro is any different than the equivalent PC you could build with the same CPU/GPU combos except more expensive and less flexible (admittedly, with a stunning case -- their manufacturing quality is incredible). Maybe they'll realize those benefits again with the M1, which is certainly cool.

Lastly, I'll never forgive them for starting the trend of eliminating 3.5mm jacks on phones.

TLDR; You're probably right. Macs aren't a great choice if you're a power user (read: control freak) with your PC.


I'm actually right there with you with your development related digs on Apple. They're draconian, and it's only gotten worse in the Tim Cook era. The major beef I've had with them the last decade are the whole slew of absurd (imo) hardware/design decisions and tiny oversights that never would've flown under the old Apple. Losing the headphone port to get a slightly thinner phone nobody was asking for? Shipping those damn butterfly keyboards? Ditching MagSafe power cords for "USB C everywhere" (but not on phones and not all usb C cables are power cables)? Pushing the Touch Bar like it’s the solution to everything when it solves nothing? Da fuq? I could go on. Just one silly thing after the other while I imagine them all patting themselves on the back for a job well done while watching that money pour in. And the prices! Macs were always expensive but damn did this last set of Pro level laptops and desktops get stupid expensive.

It's almost like they put a numbers guy in the CEO role of a formerly design obsessed company and the metric turned from "is this insanely great?" to "will this get us to the _next_ trillion dollars?"

So yeah, as a lifelong Apple user I've had some major beef. There's plenty to complain about. But with that all being said, I’ve been pretty happy experience and productivity wise running an old 2013 MacBook Pro and and a tiny iPhone SE and never considered leaving the Apple ecosystem, because honestly… what’s the alternative? Windows feels like nonsense to me, Google is privacy nightmare, and going full Linux feels like… work. I'm sure it would be possible to carve out a nice system and find some linux gui that doesn't look like complete garbage, tweak my settings just right, and find all the free versions of the apps I need. But at the end of the day I don't think I'd get that "it just works" feeling I still get with Mac OS—for me I'd think it'd feel like a successfully completed science project.

So for me, even at their worst, Apple is still best. And this M1 news and the new iPhone form factors (the 12 mini, specifically) has given me some hope that maybe they haven't completely lost their way.


> Why can't I do this by clicking something in system prefs like when I open a downloaded app?

If you could change it in system preferences it'd defeat the entire point of SIP, which is to run the OS rootless. Your user isn't privileged enough to disable it which is why you need to boot into recovery mode in order to disable it. This feature isn't unique to MacOS, either [1].

And of course, if you don't like it, you can disable it once and never have to think about it again! At worst you're running with true root privileges, which isn't different from linux.

For what it's worth, Yabai's tiling actually works fine without disabling SIP. Your link enumerates the features that requires disabling SIP.

> But, my experience has often been that things I would be willing to invest to customize on macOS are simply not customizeable.

A lot of the time it's definitely more convoluted to customize MacOS though i am curious what sorts of things you are trying to customize that you couldn't.

> (as to the 'your time is money!' counterargument people sometimes make here - it takes me about 2 hours to build a PC and my linux box hasn't panicked or booted improperly once).

When people say "your time is money" with regards to using Linux, it's rarely about building a computer and generally has to do with using the OS itself. 

> TLDR; You're probably right. Macs aren't a great choice if you're a power user (read: control freak) with your PC.

I think it's more that it's not a great choice if you don't want to invest the same amount of time you've invested in learning how to be a power user in a different operating system.

[1] https://www.freebsd.org/doc/en_US.ISO8859-1/books/faq/securi...


I think your last point is definitely valid. Best counterpoint I could make there is investing the time in linux is probably more widely valuable/applicable in terms of being able to apply it on the job.

I learned a bunch of linux stuff for fun, and it's helped me a lot in my job. I dunno if I could say the same about macOS.


Thanks for a really balanced comment. Just on the ability to customise point - I've always thought that the Mac strikes a good balance - most things I want to change I can (including what is shown in the Finder sidebar) but also things that I really shouldn't be spending my time on I can't. Much as I like much of the linux distros I've tried I spend too much time tinkering!


> Comments like these always make me wonder if I'm the only person in the world still delighted by Apple products. I mean I alone can not be responsible for their sales volume.

I agree with you, but I've admittedly never used macOS migration assistant because I find it easier to just copy over a few directories and get to work.

Even when I've had Apple products fail, I've had great experiences with support in and out of warranty. And the little touches how devices interop and the like are a delight.


I have had too many issues that were too complicated to solve. Add to that weird hardware issues: the touch-IC debacle hit me and Apple was shitty about it, two failed SATA cables (WTF!!), and the Apple Store wanted to charge me $500 and burn my data. What was the chip that magically became desoldered in newer MacBooks? I had that. Apple fixed it not by resoldering, but by putting a rubber pad on the chip so the chassis would push it into place. For that they 1. deleted all my data and 2. claimed warranty didn't cover it. I ended up having to go through the Swedish consumer agency.

I think they make decent stuff but are quite possibly the worst company I have ever dealt with. I will never buy an apple product ever again. It was like an abusive relationship.


Mac always works great for me, the hardware (post keyboard fix or pre keyboard problems) is good, so I stick with it.

For 95% of users, Mac just works with the exception of how MS Office programs can be pretty meh on Mac.

I have tried Linux machines at work and at home, and I appreciate the open values of Linux, but I think it's few years before touchpads and multi-screen graphics are really as slick and reliable as Mac for everyone not just HN power users.

The tinkerer in me will buy a Linux machine eventually, but I have tendency to tinker instead of getting things done, which prevents me from going Linux at the moment. Mac OS just works, feels, and looks good by default and gets out of my way.

I am also waiting to see how the next couple of years of CPU wars plays out before committing to a Linux laptop like a ThinkPad with all that stuff soldered in place.

Can't see myself ever going back to Windows which I grew up with. Windows 10 just looks and feels and acts miserable.


Apple's better, but not nearly as good as they could be. Even MacOS doesn't follow their human interface guidelines. The butterfly keyboards really are sensitive to dirt, it's a problem. The touchbar is bad, it's really bad. MacOS does not have much of a lead over windows in terms of usability, and is much worse at multitasking (not at a technical level, at a UI level.) The obsession with thinness is a bit much, and they have pushed it to the point where they have compromised internal reliability and build quality and cut off ports and useful features for millimeters no one cares about. Their machines are now hot glued together and nigh unrepairable and unrecyclable, and so an environmental catastrophe as well.

They can do better. The M1 is an example of the kind of thing they're capable of pulling off from their position. It's much like Warren Buffett once said, Berkshire Hathaway, the company that made him one of the wealthiest men in the world, was the biggest mistake of his life--the opportunity cost was going into insurance, where he projects he could've made much more money.

They're the best, but a sorry disappointment compared to what they could be.


You're not the only person. I have my share of complaints, to be sure, and I do genuinely think their software QA has dropped over the last five or six years -- and their tendency to provide little to no diagnostic information when anything goes wrong is an occasional source of frustration. (I think their hardware is still pretty good, but the utter failure of the butterfly keyboard -- with some of the "why is this thing here" reaction the market has had to the Touch Bar thrown in -- has overshadowed that.) But I still like Apple products, by and large. I like the software that I'm using that's Mac-only. I still find a lot of small little things about the UX to be nicer, for me, than even recent experiences with Windows and various flavors of Linux. I could certainly move to Linux if I felt that I had to, but I suspect I'd spend literally years hitting what I considered to be rough edges.

And, to the OP's point, I've used Migration Assistant a fair amount over the years and it's always worked fine for me, and I don't think I've come across many, if any, stories of it seriously failing before.


I'm with you.

I've had my share of Apple skepticism over the years (and even some unwarranted Apple hate earlier in my life), but these days, I use Apple products because for the most part, they just work.

There are absolutely valid criticisms of Apple and their products, but this is not limited to Apple. It does seem that critics are especially vocal, and this drowns out the happy users who don't necessarily have reasons to go around proclaiming their happiness.

Squeaky wheels and all that...


Better doesn't mean good.

Currently I'm waiting for the beachball of death when trying to get into the boot volume selection menu.

I think this mac is bricked...


Ok, well let me add that I do find it good. I never seem to have the problems people complain about. I don't know why, so I'm always baffled by all the complaints.


You are not alone. Apple products are the best


No. It's just that some people try to use macOS like whatever OS they used before, instead of using macOS as macOS.


I got a M1 Mac mini and had zero issues migrating from a 6 year old macbook pro.


I'm shocked that Apple hardware isn't 100% failure proof, like Dell, HP, Lenovo, Surface, etc. I wonder which of those brands has the highest customer satisfaction?


Yeah, but with those brands I could factory reset the BIOS and call it a day, or worst case pop the lid and push the NVMe SSD back into place.

I don't think it's hardware but a controller firmware issue.


> I mean, how do you even manage to break curl...

Depending on where you tried to write to, you might need to grant "Full Disk Access" permission to your Terminal app.


It was failing after ~20%. No permission issue.


I don't doubt Apple's CPUs are great, but they coincide with the Mac's transition into a product I, otherwise, no longer wish to use.

I want access to my boot volume. I don't want cloud anything.

So Apple's technical prowess now makes me sad. I can either use an OS that doesn't head further every year in a direction I hate, or I can have a fast computer.


Are M1 Macs faster by a significant factor for dev purposes, though? I remember reading that compiles are slow on macOS, but I don't remember the reason.


Many reviews show the M1 chips offering double the compilation speed of intel based macs.


They’re faster than other Macs.


I'm also pretty bummed that I can't use the magic ARM machine yet, but this is just the beginning. Apple currently has an advantage, but I suspect its more of a market advantage than a technical advantage. Snapdragons are getting really fast too, and a lot of the difference in power comes down to a lack of demand. Now that ARM has been shown to be this good, more and similarly powered chips will be made.


Agreed. I looked at the M1 and went out and bought an XPS 15. macOS scares me; it won't be long before everything has to be bought from the App Store.


Gruber notes midway:

"Perversely, developers, who by nature of their profession best understand exactly what an architecture transition like this entails, might be among the few professions who can’t yet move their primary computing to an Apple Silicon device by nature of the software tools they depend upon. (Some developers can move now, and — because Xcode running on the M1 compiles code so much faster than any Intel MacBook — are rejoicing.)"

I know that HN has many more active/vocal webdevs than native software developers, and so the point may be lost on some. I suspect that Gruber doesn't understand why his point is actually so completely accurate.

Developers (native ones at least) need to compile, a lot. Compilation is an embarrassingly parallel task: if you've got N cores, you can use N cores and get a linear speedup for every expansion in N you are offered.

The M1 seems like an awfully nice system for people concerned about CPU/Watt, and an awfully nice bit of hardware for people who want a very powerful machine to run macOS on.

What it remains is a less-than-top-of-the-line system for people who care about CPU/$ or just maxCPU. No doubt if you're used to building native code on existing MacBook Pro or even a Mini, the M1 systems will seem incredible.

But I'm building my software on a 12-core, 32GB VM running on a Threadripper, while still able to read Hacker News and listen to Soma FM in the Linux/Debian host environment, and for people like me (especially the even tinier set who are also net-zero for electrical supply thanks to solar PV), the M1 systems are just not interesting so far.

But then, as Gruber said, I'm a developer, and this mismatch is caused by the set of tools I use.


I disagree.

I compile a lot. Every day. Almost daily in Java, Objective C, and C++ (Yes, I work actively on all three languages these months).

90% of the time I compile one file (the one I just changed) and then I link. Linking is not parallel and it takes about 4-15 seconds (dep. on the project/lang.).

I use an i9 with 8 cores, but I suspect that a CPU with fewer cores, that go really fast for seconds before throttling would be quite good for most of what I do during the day. Including testing+debugging the executable.

Recompiling everything takes minutes. It happens maybe once per hour, and I often combine the recompile with a coffee-break. So since it happens rarely it doesn't matter if that takes 0.5-5 minutes (dep. on project).

I don't mind waiting for the recompile, but I hate changing a few lines, compiling, waiting for 20 seconds, and then testing. I find that those medium-length waits destroy flow and creativity during the day.

I would definitely prefer fast single-file compilation + linking + testing, over fast full project recompiles.

In summary, I think these M1 devices will actually be great for developers using compiled languages.


It does depend on project scale, and what you're doing with it.

A full compile of my project on my 2950X (16 core threadripper gen 1) takes 4m20s.

If I'm messing around with a new GUI-visible feature, then yes, often it's 1 or 2 files to compile, then a link step (thank you, lld, for parallel linking). Anything can handle that, even if it is C++ :)

But if I'm doing a change to the core headers, it typically means multi-minute wait even on the threadripper. And that's something I've been doing for a couple of months now, working on basic architectural changes. The idea of doing such work on machine that took 50-100% longer to do a full build would be a serious problem.

I don't see anything on the horizon (yet) that uses Apple silicon that helps that kind of development process.


Yes. Agreed. Header files really kill productivity in C++ projects as they grow beyond 100,000-500,000 lines.

I admit, sometimes I work around or postpone a fix in a vital header file, because I know it will initiate a recompile and undesired wait.

I simply don't understand why this isn't properly fixed after 30 or so years. It is so inefficient for developer time, CPU resources, and overall energy usage that it becomes ridiculous sometimes.

I remember moving from Turbo Pascal in the mid 90s over to C and C++ and wondering what the hell was going on...


It would involve somehow being able to identify changes that require compilation from those that don't. This is hard to do in C++.

At least waf (build system) ignores changes to comments :)
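Another common mitigation, sketched here under the assumption that ccache is available: a compiler cache keyed on the preprocessed translation unit, so comment-only edits come back as cache hits rather than recompiles.

```shell
# Write a tiny source file, then make a comment-only edit.  ccache
# (commands left commented in case it isn't installed) hashes the
# preprocessed source, where comments are already stripped, so the
# second compile would be a near-instant cache hit.
printf 'int answer(void) { return 42; }\n' > demo.c
# ccache gcc -c demo.c -o demo.o   # first build: cache miss
printf '/* reworded comment */\nint answer(void) { return 42; }\n' > demo.c
# ccache gcc -c demo.c -o demo.o   # comment-only edit: cache hit
```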


This XCode benchmark disagrees: https://github.com/devMEremenko/XcodeBenchmark

The only machine currently beating M1 for compile times is a 28-core Xeon CPU.


Yeah, it's like the transition to SSDs. Everyone who didn't have one thought "what's the point?", and anyone who made the switch could never go back. These M1 machines are so fast you can't believe it until you've really thrown your own workflow at one.

I do frontend stuff and most of the node.js ecosystem is single threaded. Webpack and npm installs are more than 2x as fast on this M1 as on any machine I've ever owned. And all this while not even getting warm, with the battery lasting forever.


> Compilation is an embarrassingly parallel task: if you've got N cores, you can use N cores and get a linear speedup for every expansion in N you are offered.

Somebody should let the Javascript world know that. Both Webpack and TypeScript are single threaded by default and it's a pain in the ass to get them to use more threads. So my 12 core CPU mostly sits around twiddling its thumbs while I compile code for work.
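There are escape hatches, though they're opt-in rather than default. A sketch, where the package names are real but flags and config details vary by version:

```shell
# Opt-in ways to use more cores in a JS/TS build:
tools="fork-ts-checker-webpack-plugin thread-loader esbuild"
echo "worth a look: ${tools}"
#   fork-ts-checker-webpack-plugin - runs type checking in a separate process
#   thread-loader                  - worker pool for expensive webpack loaders
#   esbuild                        - a natively parallel bundler, e.g.:
#       npx esbuild src/index.ts --bundle --outfile=dist/app.js
```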


The M1 is their entry level CPU and it's compiling software as fast as a fast i7 or i9. They don't have very far to go to catch ThreadRippers, 24 Firestorm cores should more than do it.


The current M1 has 4 firestorms. Geekbench gives a score of ~7400 for the M1s. Assuming the firestorms represent some 80% of the CPU power and multiplying by 6 (6x4=24), we would get a Geekbench score around 35000.

That would place it between a 32- and 64-core threadripper.

I know, that's a lot of handwaving!
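Spelled out, the handwave goes like this (every number here is an assumption from the comment, not a measurement):

```shell
m1_score=7400     # approximate Geekbench 5 multi-core score for the M1
perf_share=80     # assume the 4 Firestorm cores contribute ~80% of it
scale=6           # 24 hypothetical Firestorm cores / 4 = 6x
estimate=$(( m1_score * perf_share / 100 * scale ))
echo "extrapolated 24-core score: ${estimate}"   # prints 35520
```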

I must admit, while I have a lot of doubts that Apple can simply scale up to 24+ core M1s while having acceptable yield, what they have going for them is low amount of heat with current core count and clock speed. So I think, they might also be able to increase the clock speed significantly from here.

But I have been wondering if their plan is instead to create an M1X (4+8 cores), and then combine up to 3-4 M1Xs in the Mac Pro and iMac Pro machines to get up to 24-32 total cores. That way they might still have high yield and only a few lines of chips (keeping the supply chain tight, which seems like Apple / Tim Cook's MO).


I'm interested to see what Apple comes out with for the higher-end MacBook Pros, iMac Pro, and Mac Pro. I expect to see the M1 used in the iMacs, although ...

That could be a single higher-end chip, with possible multi-socket support for the Mac Pro? Or it could be two chips: one for the iMac and laptops, and a beefier one for the Pro desktops?

Either way, they absolutely need to compete with Zen3 on highly parallel loads. The co-processors (GPU, NPU, etc) are an advantage for general usage, but for many devs, basic CPU core counts and high I/O speeds are the winner.

I hope they come up with something that's equivalently good as the M1 is as a low-end laptop chip.


Laptops are almost always going to weigh CPU/Watt fairly strongly.

As a developer, I don't give a damn about CPU/Watt, I care only about maxCPU, for a given budget.

I don't know why any company would build a laptop that focused on maxCPU. Gaming laptops are perhaps the exception, but Apple has historically not really participated in this subset.

>for many devs, basic CPU core counts and high I/O speeds are the winner.

bingo!


You're talking a lot about different types of devs, but it sounds to me like you are just "not a laptop user".

That's fine. But it really has nothing to do with what type of dev you are. Many more than just webdevs are happy to compile code on a laptop.

So yes, these laptops and intro level desktop are not for you.


I have a Lenovo Y700, which is a CPU beast of a laptop (built mostly for gamers, I believe). It's powerful enough to make working on my project feasible, and indeed when travelling, it is what I use, reasonably happily.

But most laptops just can't compile 800k lines of C++ fast enough to make them practical as a development system for that type of software, so the "type of dev" I am really does have something to do with it.

And yes, the new Apple laptops are not for me, which was my whole precise point, and (I believe) Gruber's point in that quoted text, however unintentional.

I'd take one of the Minis though, just to see how fast it really is :)


Fair enough, but it's probably better to compare the M1 to the CPU in your laptop then, no? Not the threadripper in your desktop. And in that case maybe the M1 IS for you, since I'd be willing to bet it outperforms your existing laptop. Assuming I've found the right computer, my existing (non-M1) Macbook Pro is beefier than your Lenovo, and it gets outperformed by the M1 too[1].

Just seems pretty disingenuous tbh. The argument that you need a seriously beefy threadripper machine to compile your code because you're an 800k LOC C++ developer falls a bit flat when you tell us later that you also compile code on your Lenovo Y700.

[1] https://browser.geekbench.com/v5/cpu/compare/5150606?baselin...


> Fair enough, but it's probably better to compare the M1 to the CPU in your laptop then, no? Not the threadripper in your desktop.

Although I appreciate the ability as a software developer to "work anywhere", if I was restricted to working only the Y700 I can guarantee you that certain development avenues would happen very differently, if at all. The laptop works fine as a machine to use "when I have to", but if I'm doing "deep" development, it's a PITA and makes me lose focus too much because of the delays in the edit-compile-test cycle.

I have absolutely no doubt that the M1 macbooks will easily outperform the Y700, along with most other computers I've ever owned.

It remains true, for now, that if you actually care about maxCPU (with a linearly parallelizable workload) (which I do for at least half of my development work), the current M1 machines are not the top of the line. I would tend to agree with anyone who suggests that this will likely change in the near future.


Ok, but again, this has really nothing to do with what kind of developer you are, it has to do with the fact that you are apparently not in the market for a new laptop.

Because if you were, the M1 laptops are the best in the market for the linearly parallelizable workloads you are talking about.


As a developer, I find long battery life, while still being able to work, to be quite nice. Power per watt is how this want is accomplished by laptop and OEM parts manufacturers.


There are no honest 8GB reviews. Gruber is selling 8GB Macs based on 16GB review units. This is the iPad 1 with 256MB all over again.


The justification that 16GB is "enough" from the Apple blogger clique like Gruber is ridiculous. Even if it is "enough" it's still what my MacBook Pro I bought 6 years ago shipped with for several hundred dollars less than this one.

Why are we giving Apple a pass on RAM being stuck at 8GB and SSDs being stuck at 128GB (only very recently bumped to 256) for almost a decade? Not to mention some of us get stuck on 8GB/128GB machines because our workplaces only buy the base "Pro" machines, refusing to drop the extra few hundred to make it what Gruber considers the base level.

SSDs are no longer expensive; go on Amazon and see how much 1TB M.2 drives cost these days.


If you need more than 16GB just... buy one of the other MacBook Pros? I really don't see what the problem is.


Some people wish they could get the performance of M1 and 32GB RAM and four Thunderbolt ports. They will get it eventually but it's a shame it's not available yet.


For the vast majority of users (who are not developers or doing production video work, etc), 16 GB is plenty. That's why.


"Pro" machine isn't for the "vast majority of users" it's for people who are developers and do video production work.


That's a surprisingly specific definition of the term you've come up with there.


According to whom?


I've always gone cheap for personal use, and the last 3 MacBook Pros I've purchased were i3/i5 8GB models, and I've never had perf/memory issues (even with 2 users logged in)

Work provides i7/i9 with maxed-out RAM and honestly I don't see a night-and-day difference in performance.


It’s enough for many of the people who actually buy Macs.


I wish Gruber would just let modest criticisms like this stuff fly rather than pen an overwrought argument like this.

I think it’s perfectly legitimate to raise concerns around Apple software because it’s one area where Apple has faced sustained criticism from its users and has done very little to remedy the situation. So if the software that runs on top of the M1 is shaky, it’s entirely realistic to expect these “warts” to be with the platform for a while.


Eh, I get his coming down on criticisms this time. It doesn't make sense to criticize the hardware for software issues. I'm not going to copy a binary compiled for x86 over to an RPi and complain about ARM chips being inadequate.

If there are software issues, address them as such.


But when you purchase an Apple product you're not buying one or the other: You're buying the complete package. They are one.


I think a lot of his frustration seems to come from people using Apple software issues to write off ARM hardware overall. In the M1's case specifically the software may hold it back, but that doesn't justify claims that x86 systems are fundamentally superior.


Software can—and will—be updated. If the problem is with the M1, it's going to be a problem for the life of the laptop.


> It was a fundamental trade-off inherent to PC computing, and now we don’t have to make it

I wonder how many of the currently accepted trade-offs are actually false.

At low levels it's kind of easy to prove, like you can store either more or bigger packages in a truck but not both. But the higher you go, the farther from physics, the more difficult it becomes to know if you're making decisions on what is essentially a flaky trade-off.


I want a "hot" M1, one that runs at 5GHz and burns 120W (or whatever) at load, because it will be even faster. That is how this works: the reason you're seeing "fast and cool" is because the target is 15W. If that becomes 25W then there is budget for more cores, or faster clocks (or whatever), and the result is a faster machine. Which is why a lot of people are holding out for the M2 (again, or whatever they call the next version they put in the bigger machines), because it should be even more impressive if Apple can scale it.


I guess that is what is coming: M2 will be 40% faster, still be 10W but come in 1x for Air and 2x configuration for MBP 16 and desktop Macs. I think this is just the beginning of a growing wave lapping the x86 space.


I have a four year old MacBook that I am still happy with, but when LispWorks supports M1, I think that I will get the small M1 MacBook Pro - I can live with an extra pound of weight.

I use my iPad Pro for just about everything except programming, so laptop selection is not as important to me as it used to be.


I think the missing part of the discussion is that the M1 can run fast and hot... It hasn't transcended physics - it's just not being pushed hard enough.

Give it a 800% overclock and dunk it in liquid nitrogen and then we'll talk.


It will conk out at rather low clocks, as the design doesn't use many high-frequency tricks such as local clocks, instead going for high efficiency.


No device this is going in has a large TDP budget. A unit tuned for a desktop heat sink will put in some interesting numbers. I'm curious to see when they put out a Mac Pro with an M-series chip. Or if server is ever coming back.


The distinction here is between "fast" and "fast*er*". It isn't that it can't run faster and hot (because of course it could), but that it's able to achieve fast performance relative to the competition without having to get "hot".


    for i in 1 2 3 4 5 6 7 8; do
        cat /dev/urandom > /dev/null &
    done


Makes me wonder what's going to happen when we have ARM-based desktop computers and gaming rigs.


Maybe we'll finally be able to run Crysis at max settings


"Fast and hot" vs. "slow and cool" is still valid. The M1 is as fast as or faster than other CPUs, but it could surely run faster. It would just get hot.


IDK how it actually went down at Apple HQ but I can imagine. They are so, so keen on sleek, quiet machines that it tops all other concerns. When the hardware engineers said "more performance is going to require a fan or two" the leadership said "nah, let's design and fab our own CPUs".


More likely that they realized their iPad chips were faster than their Intel notebooks, and really had no choice but to move to their own chips on the Mac.


I have to imagine this is a joke or do you actually think the only advantage of the M1 is that you need 1 less fan in your machine?


Joke with a kernel of truth. I'm not saying they didn't do a great job with the M1. I'm just saying it's an unusual direction for their business to take unless it was initiated to serve their core strategy of delivering visually appealing devices with nice software.


Well, and who's laughing now? :)


I applaud all the fellow misophoniacs in Cupertino! No constant or repeating sound is too quiet to be ignored.


Exactly: Apple's own slide in the article demonstrates this.


I feel like the M1 hype is getting out of hand. Apple bought up TSMC’s entire 5nm manufacturing capacity so of course it’s going to pack more transistors and use less power than AMD’s 7nm TSMC process or Intel’s 10nm process (which is effectively similar to 7nm TSMC).

The M1 is a great and well-refined design but when compared to AMD’s similarly priced 4750U in multi core benchmarks, the performance per watt is better but not by much more than the improved process would suggest. And that’s without the IPC improvements that mobile Zen 3 will bring.

And when compared to last gen 14nm Intel MacBooks, a 5nm TSMC part is going to blow the doors off just by virtue of feature size alone.


It very well may be overhyped, but with the breakdown of Dennard scaling[1] the gains we've seen in CPUs have been pretty minor from generation to generation.

I think that could also be the reason it is getting so hyped: it is a tangible improvement over other CPUs. 50% more battery life, better performance in some cases. What's the last CPU generation where we saw such gains? Probably as far back as when Dennard scaling was still a thing.

[1] https://en.wikipedia.org/wiki/Dennard_scaling


But AMD did just that with the Ryzen 4000 series.

I'm on a 3 month wait list right now to get an 8 core/16 thread AMD Thinkpad. Nobody else currently produces a sub-15W part with that much computational power.

The major difference vs the M1 is that AMD's micro-PCB design scales in core count rather than producing a single dense power-efficient die.


> I'm on a 3 month wait list right now to get an 8 core/16 thread AMD Thinkpad. Nobody else currently produces a sub-15W part with that much computational power.

This is a big part of the reason that the AMD mobile chip comparisons seem kinda beside the point to me. For a bunch of reasons, it's almost impossible to get a high quality laptop today w/ the latest and greatest AMD chips that people love to compare to the M1 chips. I know AMD makes great chips, but what does that matter if you can't buy them in a laptop today?

FWIW - I love AMD; I have a Ryzen desktop for gaming. It's fantastic, the competition they are bringing to the desktop market. But I don't want to play the game of searching far and wide for high quality laptops that have the new Ryzen chips _and_ are actually in stock somewhere in a configuration I want _and_ are from a reputable manufacturer.

I was able to pick up an M1 Air on release date w/ the upgraded specs from my local Apple store. It's a glorious machine, and I knew the build quality and hardware were going to be top notch.

OEMs still largely prefer Intel for laptops for whatever reason, which sucks, and it seriously hampers the ability for AMD to compete in this market.


>OEMs still largely prefer Intel for laptops for whatever reason, which sucks, and it seriously hampers the ability for AMD to compete in this market.

OEMs might prefer Intel because, unlike AMD, Intel is actually able to supply the CPUs to go in their notebooks. A slow CPU is better than no CPU. My understanding is that AMD is mainly focused on supplying the gaming console makers, so I don't know that the situation will improve over the next few years for laptop manufacturers seeking AMD chips.


Sure, as I said, I think there are many reasons that AMD has largely not been able to make a dent in the laptop market.

I do find it amusing the continual retorts to M1 performance of "but wait until the Ryzen 4800/4900 laptops are out in force!" as if that were something that is realistic at all in the next year. AMD can't even keep up w/ demand for its graphics cards and console chips right now, unfortunately.


We have seen massive improvements in power envelope in CPUs in fairly recent generations, to say nothing of the power improvements seen in some generation->generation GPU improvements.

For example, with an improvement to the 14nm process and not a die shrink, Kaby Lake->Whiskey Lake saw low power processors double in core counts (2C/4T->4C/8T) in essentially the same power/thermal envelope. Basically every thin laptop/ultrabook family doubled its core count.


Well... yeah... but they also dropped clock rate from 2.5GHz-ish to 1.7GHz-ish. That could equally well explain the increase in core count at the same TDP. You're gaining about 15% IPC improvement from Kaby->Whiskey [1].

It's an overall improvement, but not as dramatic as "2x cores for 2x perf at the same power"

[1] https://www.anandtech.com/show/14514/examining-intels-ice-la...
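Taking the parent's figures at face value, the expected multi-core gain at base clocks is indeed well short of 2x (all inputs here are the comment's estimates, not measurements):

```shell
# 2x cores, base clock dropping from ~2.5GHz to ~1.7GHz, ~15% IPC gain.
ratio=$(awk 'BEGIN { printf "%.2f", 2 * (1.7 / 2.5) * 1.15 }')
echo "expected multi-core speedup at base clocks: ${ratio}x"   # ~1.56x
```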


"Base" clock dropped, but boost clocks remained fairly high. In practice performance gains were quite good (except, notably, for that time when Apple used old power control firmware and had 6C/12T processors underperform their 4C/8T predecessors).

The point of this is that the significant improvement from Kaby Lake to Whiskey Lake involved only small architectural refinement and updates to an existing process, so much larger performance/power improvements should absolutely be expected from an entirely new process plus architecture refinement.


AMD's current-gen 4800H/U is still competitive with the air-cooled M1 (Mac mini) in both performance per watt and raw performance.

M1 has better single core perf, but the AMD 4800 edges it out in multi-core perf on Geekbench(apparently) and smokes it in multi-core Cinebench.

https://www.cpu-monkey.com/en/cpu-apple_m1-1804 https://www.cpu-monkey.com/en/cpu-amd_ryzen_7_4800u-1142


Not a huge surprise that 8 perf cores > 4 perf + 4 efficiency cores.


> Apple bought up TSMC’s entire 5nm manufacturing capacity

... until the end of 2020.


A lot of the arguments here seem to come from people thinking laptops are the only game in town. When someone makes unqualified claims about the m1 beating "all the other PC cpus" of course people are going to compare it to desktop chips. Even Apple themselves advertised it as the "worlds fastest CPU core" when they announced it, which is blatantly untrue.

People aren't saying that this isn't a very compelling CPU, they're just annoyed by the bullshit marketing, and the people blindly repeating it.


> Even Apple themselves advertised it as the "worlds fastest CPU core" when they announced it, which is blatantly untrue.

Is it? In Geekbench the M1 was outperforming the 5950X in single-core.

Source: https://www.reddit.com/r/Amd/comments/jspnhp/apple_new_m1_ch...

EDIT: Looks like that post is comparing 2 different Geekbench versions. Newer Geekbench versions have optimizations for Zen 3, putting it back at the top.


Highest Geekbench 5 single core score is apparently 2239 on a 5800x (https://browser.geekbench.com/v5/cpu/singlecore?page=1)

Highest m1 single core score is 1716 (https://browser.geekbench.com/v5/cpu/search?q=Apple+M1)

(Of course they're likely overclocked past factory settings, but if we ignore those results that's not the fastest CPU core in the world, is it?)

Geekbench has also not traditionally been a benchmark used for x86 desktop chips, it's more commonly used to compare android and apple phones.


> Geekbench has also not traditionally been a benchmark used for x86 desktop chips, it's more commonly used to compare android and apple phones.

This isn’t really true. Geekbench started out as a desktop (Mac) app. Originally it was calibrated so that Power Mac G5 would get a score of 1000. They’ve recalibrated since then and added mobile versions, but it has its roots in desktop benchmarking.


2239 for a Ryzen 5800x single core isn’t just overclocked, that’s like 6GHz LN2 overclocked. Looks dubious to me.


The M1 changed the game. Measuring single core on an M1 may be counterproductive. E.g. JavaScript is single-threaded, yet the M1's extremely wide out-of-order core executes many of its instructions in parallel. That's why JavaScript on the M1 is twice as fast as on Intel, even though the M1 runs at just ~3GHz.


So on my MacBook Pro 8GB M1, everything node.js-related (that's what I work with; node.js compiled to ARM via `npm install v15`) is more than 2x as fast as on a Ryzen 3900x PC, and even faster than on my i7 16" 32GB 2018 MacBook Pro. Example: my Angular project compiles in 65 seconds on the Intel MacBook, fans screaming in terror, and in 30 seconds on the M1 MacBook, not even getting warm.

So if Intel came with a new CPU that said all my workflows are now 120% faster and power consumption dropped 4x because of a breakthrough (and not 4% year over year like the last 10 years), would you believe it?

But because Apple did it, it is "blatantly untrue"?


`nvm install v15`, not npm :D


> because iPhones and iPads run fast and cool

For certain values of "cool", for certain tasks.


Well, where "hot" means "needs a fan".

iPhones and iPads sure can run quite warm when charging and doing intensive things on them.


A vast understatement, IMO. I've gotten my iPad Pro hot enough you couldn't hold it in your hands (well, I couldn't) while running graphics-intensive software.


The laptops have a lot more surface to dissipate heat than a phone, and are less likely to operate wrapped in an insulating case. Also, that some models have active cooling can make a difference here.


TBH, I think that most experts who didn't see this coming were simply in denial.

Disqualifying benchmarks because... it's mobile?

Disregarding performance and cooling differences with an iPad because... it's an iPad?

They all sound like excuses to disregard the obvious results. Synthetic benchmarks can lack fine grained precision. They might be bad at determining who's on top when there's a very small gap. But they provide valid results for determining if two products are in the same LEAGUE.

Sure, an iPad couldn't run Linux, gcc, or StarCraft, but that's mostly an ecosystem issue, not a hardware performance one.

We've seen this coming for years now. Both the big obvious signs, and the little signs, like Apple continuously unifying code and APIs across platforms.


> We knew this to be true because that was the way things were. But now, with the M1 Macs, it’s not. M1 Macs run very fast and do so while remaining very cool and lasting mind-bogglingly long on battery. It was a fundamental trade-off inherent to PC computing, and now we don’t have to make it.

What a pile of horseshit. The tradeoff between heat and performance was always within a generation - "fast" is always relative. It's just the baseline that has moved, same as it has with every Intel/AMD generation, not a fundamental change.


Windows Bootcamp can't run on M1. So there you go there is a wart that all the 100% positive reviews aren't mentioning.


I'm now hoping to be surprised by a similar disruption in web browsers! (although I'm not holding my breath...)


Since this annoying article is really all about intel... I wonder if anyone could answer me this:

How practical would it be for Intel to pivot away from the deficiencies of their ISA, e.g. to be able to make improvements in OoO comparable to the M1?

It's well understood such a move would completely remove Intel's advantage of high backward compatibility, but assuming the market is open to such a change, how technically feasible would it be? E.g. is the micro-architecture more tightly coupled to the ISA than one might expect? Does such an endeavor mean scrapping most of it, or is a significant portion of the technology under that ISA layer flexible enough to be adapted without ending up with the same limitations as current Intel chips?


The M1 sounds great, and the fundamental changes in the architecture make so much sense that you wonder why Intel and AMD didn't think of it.

However, only macOS runs on the M1 because these fundamental changes require a huge OS overhaul. Yes, a lot of work, like the Rosetta 2 translation layer, must have been put in so that existing 3rd-party apps can run. But the M1 is optimized for Apple's common denominator, not anyone else's.

Ultimately, the real question is whether the M1 will be opened for Windows or Linux to build on. I doubt it.

Intel and AMD will just have to build their own heterogeneous processor and Windows and Linux will have to build a hypervisor layer or just go native.


> emulating or translating apps compiled for a different architecture is necessarily going to be irritatingly slow and somewhat incompatible at best

While it wasn't as popular, Palm OS did this beautifully when going from Motorola 68k-based devices to ARM. Granted, the executables are tiny (usually 10 to 64k in size or smaller), but the execution was flawless. In this comparison, I am not nitpicking on emulation vs translation vs JIT. Whatever it is, the 68k binary ran on ARM without recompilation.


I'll wait, unfortunately.

I need docker today.

I need my imagePROGRAF printer drivers for photography today.

I need Luminar AI plugin for Photoshop today.

Etc.

But I am excited to pay for v2 of a MacBook Pro 14. So maybe 18 months. It will be a long wait... :/


I hear a lot of "all PC laptops are now comparatively a bad deal", which is a real bummer since it's always good for Apple to have competition. If Apple is reserving all of TSMC's 5nm (and probably 3nm) node output, is Samsung a viable competitor?

Does the world of ultra-cheap (i.e. <$500) laptops come back to ARM Chromebooks? Will Qualcomm be able to mimic Apple's design success?

I'm kind of with Gruber that this is a game changer, but what does it mean for the ecosystem?


Competition. And I'm excited.


I mostly work on Android apps (and ocasionally backend). Just started learning iOS dev.

I think overall the responses from software developers about M1 is positive. Yes, I also have read steipete's post: https://steipete.com/posts/apple-silicon-m1-a-developer-pers...

Perhaps next year the ARM port of Android Studio is already available? :D


One of my best friends was high up in the M1 project (don't want to get too specific, but he works at Apple).

I am not surprised about the M1 performance numbers at all. Granted, I have a degree in this area (silicon design) and many friends in the industry. But it's the sort of thing where if you're paying any attention, it's just inevitable.

(1) Intel has been a mess for a while. Multiple canceled high-profile projects (not announced publicly), some high-profile process technology missteps, and frankly...they just feel kind of rudderless. A lot of my more career-minded friends have jumped ship. There just doesn't seem to be a concrete goal they're pursuing. They've had what, three CEOs in the last few years?

(2) Apple's semiconductor team is really good. REALLY good. It was a Steve Jobs-level initiative that started with the acquisition of PA Semi and they've assembled a lot of the best people in the industry.

(3) Apple sells something like 10-100x as many phones as computers each year. This means there's much, much more scope for high-budget R&D on the phones. And in this case, they took a lot of what they learned about system-on-chips, which are used in phones, and brought it back to PCs.

(4) It's pretty obvious the architecture of PCs was overdue for a bit of a rethink. Just look at a PC mainboard. There's all kinds of shit on there, lots of clocked digital electronics, BIOS chips, memory controllers, real-time timers, a giant energy-eating PCI bus, etc. If you stop and think about it, we've probably reached the point where the whole thing needed to be repackaged into a single part. With the exception of overclockers and hardcore gamers, most people don't upgrade their CPU, or memory, so right there, you can remove a bunch of clunky edge connectors and all their bus logic. This means the whole thing can run on fewer synchronous clocks, which dramatically improves power efficiency and also (I'm not sure about this, but it stands to reason) performance. There's also a completely unified memory model--one giant flat RAM between CPU and GPU--so no schlepping all the things back and forth from GPU to CPU memory. Huge performance improvement right there, just by eliminating stupid legacy bullshit we don't need anymore.

As a software guy, it's like they just ripped out a bunch of legacy stuff and got rid of all the unnecessary and complex silicon.

No less important, there was a movement in academia about ten years ago toward using FPGAs (basically reprogrammable silicon) to do various special-purpose tasks like image processing, DSP, and GPU-style compute. Custom silicon is way more efficient (both power and performance) than general-purpose CPU silicon. If you look at the M1 design, they bundled a bunch of custom stuff into a single package, so that rather than having a CPU, you get a CPU, memory controller, GPU, image processor, etc. on a single piece of silicon.

(5) And finally, they did a bunch of sensible things with the ARM CPU that anyone with an undergraduate degree in computer engineering wouldn't be surprised by. I read something about large reorder buffers and a few other things.

I'm not in any way minimizing this achievement. It is a very big deal. But like a lot of what Apple does, it's just a relatively straightforward idea, executed to a very, very high level of excellence by a very good team. And it's interesting how all these little 5% improvements here, there, and everywhere culminated into a much larger, qualitative degree of excellence. It all feels very "Apple".


This is all true. You look at a new PC and compare it to a 10-year-old one and it's virtually the same, even in performance.


The only real (but small) downside, and just for some of us, is that you can't daisy-chain two external monitors to the new M1 laptops. It's true that the mini can use two external monitors connected to separate ports, and that you can jump through hoops to use more than two external monitors, which is why it's not such a big deal.


This Gruber character is something else. No, the rules of physics and heat generation have not been repealed. You would think the fact that you can't crank out 100% performance on a MacBook Air for too long before it throttles would be evidence enough that heat dissipation was still the prevailing reality, but not so for Gruber!


What's crazy about the M1 is that you can run the MacBook Air at 100% for a little under 10 minutes before it throttles. That's a breakthrough compared to Intel chips.


Can you be a bit more specific about what section of the article you're referring to? The claim I read is that it runs fast and cool, as perceived by touching the outside case. I didn't find any claim that no heat was generated at all and didn't read anything about throttling in this article.


Can you be a bit more specific? I didn't find any claim that "no heat was generated" — it isn't stated anywhere. The first paragraph talks about how "fast computers should run hot." In an article titled "truth and truthiness", vague usage of "hot" and "fast" is one way to make the truth difficult to pin down.


This is reasonable enough but I’m not sure how it relates to the parent comment which brings up the possibility of laws of physics being repealed. A more complete paraphrase of the first paragraph is, people believed fast computers should run hot. And that certainly matches my experience to date.


The Air throttled is still faster than a majority of laptops out there. And fanless.


It's just not true. Ryzen laptops that are just as impressive have been available for months at a fraction of the price of a MacBook. https://imagine27.com/rise-of-the-mac-serial-killers


By just as impressive you mean 40% slower based on these benchmarks.


And also 50% of the battery life.


A naïve question: Couldn't other ARM licensees just integrate RAM into the SoC like Apple did and easily come close to the M1's performance? By "easily" I mean in a doable manner within 1-2 years. I know there are more tricks/magic involved, but just to start somewhere.


The biggest wart with the M1 Macs is that you're stuck running MacOS, which just feels like molasses every time I use it.

https://www.phoronix.com/scan.php?page=article&item=macos101...

It's the same problem with A14 iPhones. You're stuck running iOS, which is not only too restricted to be of any use, but also too slow to showcase the hardware.

https://www.youtube.com/watch?v=emPiTZHdP88

https://youtu.be/hPhkPXVxISY

https://youtu.be/B5ZT9z9Bt4M


This article covers the fact that the M1 is both cool and very fast. My question is: Could Apple now clock up the M1 so that it's as hot as other chips, but insanely fast?


This is good. We have a problem of accountability in our media. Spewing nonsense for years should come with consequences even if it gets you clicks.


TLDR: Author believes (perhaps rightfully) that the M1 review from perennial Apple skeptic Patrick Moorhead is wrong.


Can someone please ELI5 why the M1 is so much faster than every other ARM chip?


1. They took all the different chips and crammed more of them onto one chip. Now the different parts of the computer can talk to each other faster without any more work.

2. The new chip does lots of little things waaaay faster than the old chip does a handful of big things.

3. Apple figured out how to do a lot more things before they need to. Like if you did all of the math problems in your textbook ahead of time and then when the teacher eventually gives you the assignment you hand in the ones they asked for.


The processor looks at the future instructions it will run, can see that some things can be run ahead of time, and can run multiple instructions in parallel (on the same core). You could run individual multiplications in parallel, for example. Most other ARM processors can do those things too, but the M1 is bigger in some of those dimensions: how many instructions ahead it looks, how many things can run at once, etc.
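The effect of "looking further ahead" can be sketched with a deliberately crude toy model — nothing like a real scheduler, and the widths, window sizes, and instruction mix below are invented purely for illustration:

```python
# Toy model of superscalar, out-of-order issue. Each "instruction"
# lists which earlier instructions it depends on; all take one cycle.
# Per cycle the core issues up to `width` instructions, chosen from
# the first `window` not-yet-issued ones whose dependencies are done.

def cycles(deps, width, window):
    done = set()
    remaining = list(range(len(deps)))
    n = 0
    while remaining:
        n += 1
        ready = [i for i in remaining[:window]
                 if all(d in done for d in deps[i])]
        for i in ready[:width]:
            remaining.remove(i)
            done.add(i)
    return n

# A four-long dependent chain (0 <- 1 <- 2 <- 3) followed by
# four independent operations.
deps = [[], [0], [1], [2], [], [], [], []]

narrow = cycles(deps, width=2, window=2)  # 6 cycles
wide = cycles(deps, width=4, window=8)    # 4 cycles
```

The wider core with the deeper window slips the independent work in alongside the dependent chain, which is (very loosely) the kind of advantage being described.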


Which other ARM chips are you comparing with? It’s not much faster than A14, Apple’s other ARM chip. Which in turn is around 20-30% faster than A13, etc. Apple has been making their ARM chips faster very gradually over the last 8 years. M1 is just the first one they’ve put in a computer and more people are paying attention to their speed.


Far from "like I'm 5", but here is a technical explanation: https://debugger.medium.com/why-is-apples-m1-chip-so-fast-32...


An excellent article.


Isn't it because it is a SoC?

I found the following article enlightening:

https://screenrant.com/apple-silicon-m1-mac-macbook-chip-soc...

Edit: ignore that previous link, way better is the one mentioned by another poster:

https://debugger.medium.com/why-is-apples-m1-chip-so-fast-32...


The M1 is fast but also completely closed — impossible to extend or change anything. On my PC I can buy more RAM, and I can buy a new graphics card every 2-3 years. It's a very different set of constraints than AMD/Intel supporting a million different configurations plus legacy.


I get the "fast and hot" v "slow and cold" history of devices, and am pleased to be told that the M1 has broken this. However, and it may well be that I haven't understood complexities here, a recent Ars review [1] (glowing, by the way) of the M1 MacBook Air, contained this passage:

> According to Apple, the MacBook Air's M1 is voltage-limited in order to function within the fanless design's thermal envelope. iFixit's teardown shows in detail that the Air's M1 cooling setup is an entirely passive affair, with just a heat transfer plate in between the M1 CPU and the aluminum body. I was expecting performance similar to but perhaps a bit lower than the M1-powered Mac mini, and that's more or less what I got. However, the Air's M1 is good for at least a few solid minutes of full-bore Firestorm core performance before it throttles back.

> In benchmarking, I noticed that subsequent runs of the Final Cut Pro export would slow down dramatically—the first export would complete in about 1 minute and 19 seconds, but if I immediately repeated the export it would take a bit under 2.5 minutes—and the Air would be quite warm to the touch. After closing the lid to hibernate until the Air was cool and then repeating the export, the time was once again in the 1:20-ish range.

> To create some more sustained load, I cloned the source video three times and then repeated the export process. Starting from a cold startup with the MBA's chassis at ambient temperature gave a result of 4 minutes, 21 seconds. This time, I opened Activity Monitor's CPU graph to spy on the core utilization. All eight cores were engaged until about 2:56, at which time half of the cores—presumably the high-performance Firestorm cores—dropped to less than 50-percent usage and stayed there until the run completed.

> A second run immediately after that took 7:37—not quite twice as long, but heading in that direction. Activity Monitor's CPU usage graph showed half of the cores (presumably the high-performance Firestorm cores) at half utilization for the entire run.

> Further testing—including several runs after letting the MBA sit powered off for about an hour to make absolutely sure it was cooled to ambient—failed to produce anything resembling a precise, repeatable time interval for when throttling starts. The best I can do is to say that it seems that when you throw a heavy workload at the MBA, it runs at full-bore until the Firestorm cores become too toasty, which seems to take anywhere from 3-ish to 6-ish minutes. Then it backs the Firestorm cores off until they show about 50-percent utilization, and the amount of heat generated at that level seems to be within the sustained thermal capacity of the design.

My amateur reading of this was that, yes, the M1 is incredibly fast when cold, but when it warms up it becomes measurably slower because of throttling - even if still fast by comparison to previous Air models.

[1] https://arstechnica.com/gadgets/2020/11/apples-m1-macbook-ai...


> M1 Macs embarrass all other PCs...

Sure, if you can bear living in a prison complex that has shitty window management and a petty tyrant of a warden.


I am willing to bet if mods did some data munging they would find sockpuppetry of a hail corporate type at work, because if hn is really this brainwashed about macs it makes me sad.


Apple has always operated with a more closed-off ecosystem that sacrifices openness for tighter control on the user experience, and that's always been at odds with the "I want access to everything so I can tinker" mindset on HN. I expect any HN discussion about Apple to be a mess, because the average HN commenter/reader wants something very different than what Apple sells.


I think something like 80% of Silicon Valley tech workers are using Macs and tons of them hang out on HN. I have always seen the HN crowd as being mostly pro-Apple.

Maybe I'm wrong though - it might depend on the thread. Whenever I put out an anti-Apple sentiment on a pro-Apple article thread I usually get massive amounts of disagreement.

My own dislike for Apple goes way beyond their protectionist policies. Maybe if I didn't hate the Mac UI with that hideous global menu bar and horrid window management, I could be more forgiving towards their politics. Heck, I used Windows as my main OS for decades until I finally switched to Manjaro and XFCE for the sheer convenience of running Docker and npm/yarn smoothly without all the Windows cruft slowing me down all the time.

It's funny though - I do less tinkering on my Linux desktop than I ever did on Windows (or my Mac), since everything just works the way I want it to in Manjaro. I think my first week using it, I had to figure out how to write an xdotool script to get window snapping in XFCE. Never had to touch it since. My Mac has broken things like Karabiner, Better Touch Tool, Little Snitch and so forth after many updates - so much so that I just stopped reinstalling them. Luckily, I only have to use it once in a while for doing the occasional Mac/iOS thing.


I'd like to read this article but the text/background colour combo is headache inducing after a few minutes.. anyone else ?


Doesn't bother me, but if you're using Safari, try switching to Reader view.


Too bad their OS is unusable...


I submitted the same exact link 8 hours earlier. This HN behavior is very frustrating.

https://news.ycombinator.com/item?id=25285489


Yes, sorry—I know it's frustrating when a later submission 'wins'.

The reason we allow reposts of articles that haven't had significant attention yet is that it's largely random what happens to get noticed on /newest, and we want to allow good articles multiple rolls of the dice. The priority is on the article, not who submitted it. That does make it a bit of a lottery, and it's winner-take-all. There are definitely better solutions possible, and it's on my list to work on that someday. We'll expand how karma is rewarded for submitting an article (HN could use some karma inflation anyhow) and find some way of grouping related submissions together.

In the meantime, the lottery does at least even out in the long run if you keep posting good submissions.


Are people really harvesting links on /newest? You guys should know better of course, but I always thought what drove an article to the top was multiple people submitting the same link. That was my reasoning for a 24h window defining a submission.

As for karma inflation, it's an interesting concept, but I don't know if I'd like that :)


I don't know what you mean by harvesting links on /newest. Can you explain?


Sorry, I meant that I for one don’t go to /newest often.

And I didn’t imagine it was what drove most links to the top of the homepage. I always thought that many people submitting the same URLs was more influential.


To me, this is a community of people that want to share interesting topics and information, and contribute to that sharing and discussion. The greatest prize of all this is how much accurate and useful information we all get out of this. I've become a veritable encyclopedia, a proficient and effective software engineer, a resource on many topics for friends and family, and continue to be an enthusiast of computing software and hardware advances.

There's no denying the part of our brain that gets a kick out of the internet points which reflect peer approval and respect. However, I will continue to hope to value the greater good that this community creates for all of us over the individual recognition that gives me a much shorter term buzz.


Wholeheartedly agree and couldn’t have said it better myself.

Aside from the peer approval that you mentioned, I also enjoy the idea that I can somewhat steer the conversation and see what smart people have to say on the topics I’m most interested in.


Timing of HN submissions seems to matter a lot. And a little luck.


There should at least be a 24h window separating submissions of the same link.


Why? Is it more important for the "first person" to post it to get more internet points, or for more people to actually see the link?


Both can happen at the same time if submissions within a window of 24h count as points to the original one.

Try to submit this same link now and that's what will happen. This feature already exists, it's just that it seems too narrow of a window if it's only a few hours long.


Is there a timing infographic? Or luck chart, for that matter?


I think the trick is to post when west-coast America wakes up, more or less.


That's my impression too. The previous post currently says 11 hours ago, so by my count that's midnight NYC, 5am London. I know Australia exists, but overall that just gives it most the night to slide down the page before the majority wake up.


I feel like you might be unclear on the definition of luck.


Whenever I've tried to submit an already submitted link, it takes me to that discussion on HN, so I wonder what people do to circumvent this


Apple propaganda and not worth your time reading.

We got an M1 on pre-order and so far it seems pretty half baked. Some apps work reliably, others outright don't. I'd wait another year at least for the kinks to be worked out.

Some notable apps that either don't work or break on use:

Anything from Adobe, Google sync, MS Office.


> M1 Macs embarrass all other PCs — all Intel-based Macs, including automobile-priced Mac Pros, and every single machine running Windows or Linux.

Nah, it doesn't. Source: I've got all of them


Frankly I don't care that much about the entry-level MacBook's absolute performance. But the utility of 20 hours of 4K video playback (3rd-party tests even exceeded this number) can't be overstated.


You sure watch a ton of movies.


While it's not super relevant (for countries with high covid cases) now, not needing to worry about charging in the car, at the airport, on a plane -- those are real wins. Traveling for the weekend and forgetting your charger and... being fine.

Have you ever had your laptop on a train or bus and then found out the outlets aren't powered? It's stressful!


Look up truthiness


Apple is a company that really cares about their product but not their customers. Very frustrating.


I don't understand the comparison to the iphone and android situation. As far as I can see, android phones are highly competitive with iphone on everything. The longest battery isn't on an iphone, the best camera isn't on an iphone, the latest screen tech isn't on an iphone, etc. etc.

Coming to the M1 laptops, they beat comparable Windows laptops in every single metric. It's literally just a matter of looking at the numbers and realizing how M1 has every Intel chip beat (in this category).

In fact this makes me wonder: if Apple chips are so far ahead, why are iphones not the fastest, best smartphones by a wide margin?


> In fact this makes me wonder: if Apple chips are so far ahead, why are iphones not the fastest, best smartphones by a wide margin?

Apple has pretty consistently had the fastest SoCs for phones and tablets for a few years now, particularly in single core performance. Here are some benchmark results of the iPhone 12 versus some of its Android competitors [1]. Notice that last year’s iPhone 11 Pro Max still outperforms Android phones from this year as well.

I think part of the reason this isn’t a major differentiator (for phones at least) is that phones have been “fast enough” for several years now. Given the very aggressive throttling that is utilized on these types of devices, I think the primary difference ends up being battery life / efficiency rather than raw performance.

It’s also worth noting that while the performance / efficiency gap between x86 and ARM seems to be pretty sizable at this point, the gap between various current generation ARM CPUs isn’t nearly as drastic.

1: https://www.tomsguide.com/news/iphone-12-benchmarks-this-des...


> It’s also worth noting that while the performance / efficiency gap between x86 and ARM seems to be pretty sizable at this point

It's really not, though. You have to compare performance at a given power target to judge performance/efficiency, which you can't do for most ARM CPUs. But you can get 8-core x86 CPUs that are 15W, or about 1.9W per core (example: the 4800U). Similarly, 64-core Epycs are at or below roughly 3W per core. These are all well within similar per-core power numbers as your typical big-core ARM chip.
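Spelled out as back-of-envelope math (the 15W/8-core figure is the 4800U's nameplate TDP; the ~200W 64-core figure is my own assumption for a typical Epyc part, and TDP is not the same as measured package power):

```python
# Rough per-core power from nameplate TDPs -- ballpark only.
ryzen_4800u = 15 / 8     # 4800U: 15W TDP across 8 cores -> 1.875 W/core
epyc_64core = 200 / 64   # assumed ~200W 64-core Epyc -> 3.125 W/core
```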

The perceived gap is larger than it really is, because there just isn't really an x86 tablet-focused market. The M1's jump into laptop-space puts it up and we can really see it shine, but the other ARM jumps into laptop-space were a joke. So comparing equivalent power targets becomes challenging.

But take your articles benchmarks for example. The Snapdragon manages something like 60% of the performance of the A12's. So take the existing M1 mac, cut the performance drastically, and suddenly it'd be not interesting at all. The performance/efficiency wouldn't be impressive at all if it was getting 60% of the performance it is now.


Yeah that's what puzzles me, why has chip superiority not translated into an objectively better phone? For laptops the M1 laptops are best in class in everything, battery and performance.


To a large extent, current Android flagship CPUs are "good enough" for the things people do with phones. Sure, current iPhone chips are much faster, but it doesn't matter that much for the vast majority of consumers (even for games, mobile games tend to target the lowest common denominator to a much greater extent than PC games; no-one's going to launch a mobile game that runs on the latest couple of generations of iOS and nothing else).

iPhones do likely get better longevity out of their faster hardware, mind you.


I honestly don't know why you're being downvoted.

I think for some people (eg my oldest brother), the iPhone is "the best." He's not a tech person - and that might be a perspective that you're missing.

I tend to agree that there are great androids that seem competitive with iPhones (disclosure: I haven't really owned a flagship phone for a while; this is one of android's issues! The flagships aren't that much better than the $300-400 phones)

But I know a lot of people who think the iPhone is the premium phone experience, and if they're going to spend over a thousand dollars on a phone, it'll be an iPhone.

My point here is that for many people, the iPhone is objectively better.


He is being downvoted by people who consider the choice of a specific vendor a religious issue and will vote down anyone who in their eyes disparages their vendor of choice. It is a sad fact that the existence of this phenomenon only leads to the reinforcement of the religious behaviour since their insistence on the infallibility of their vendor is questioned by those not part of their cult, whereupon they double down on the magnificence and superiority of their vendor's products over everything else, which leads to more reactions and the cycle continues.

The power/performance ratio of the M1 SoC does not seem to need religious adoration to be seen as a significant step in the establishment of ARM as a real competitor to the AMD64 hegemony. The wait is now for other vendors - Samsung, Qualcomm, etc - to launch similar "desktop-oriented" SoCs which can be used by both traditional "desktop" vendors (HP, Dell, Lenovo and to a limited extent Samsung etc) as well as traditional mobile-oriented vendors like Xiaomi. Some of those vendors will eventually produce ARM-based platforms which support user-upgradeable memory, GPUs and storage - and possibly also user-upgradeable CPUs - like they do for AMD64-based systems. Once these systems become available ARM has a real chance to take over the market.


Great points. I remember back in (good lord) 2011 when the MRI research was shown [1]

I'm very excited about the new m1. While I'm tempted to buy it, I also can't wait to see what they do with it in their pro models. Or what the second generation looks like.

The competition is great for everyone, and Apple going all in on arm as they run from Intel's manufacturing failures is very well-timed.

I think this was somewhat inevitable, to be sure. Microsoft had Qualcomm and AMD processors in their Surface [2], and I think as TSMC continues to dominate in manufacturing, we will see really exciting gains.

Also, I'm thrilled that CPUs are coming with tensor cores now. I think fast matrix operations might be separated from gpus (at the least, you no longer need a >$400 gpu for it), and that's a win for everyone, too.

In two years, we could be able to reimplement early neural network research on a macbook air! (I mean, you already can, but instead of taking minutes/hours/days, even more recent things become accessible)

[1] https://www.engadget.com/2011-05-17-bbc-loving-apple-looks-l...

[2] https://www.theverge.com/2019/10/2/20888999/microsoft-surfac...


I don't know about the other metrics you talk about, but iPhones _are_ the fastest. In general, I find iPhones better than Android for general use - but that's a subjective opinion. Apple provides the best overall package, even as the only thing they beat everyone else on is speed (to the best of my knowledge).


Plus they seem to last a lot longer than your average android smartphone in terms of longer term performance/updates. I'm holding out a (faint) hope that they'll make at least one generation with USBC before going port-less as that's the main thing stopping me from switching


I think this is often about the experience of using it as opposed to raw compute performance benchmarks (even if M1 does excel in those). In many cases, be it because of integration, more care about design, or whatever, the apple experience feels more... snappy? Smooth? It's hard to put a finger on what exactly leads to this, and it may also be about the combination of many small details.


Apple phones are faster both in subjective experience and objective benchmarks.


Snapdragon 8x5 CPUs have been fast enough for a while now, and you wouldn't notice a difference as long as you run a decent flavor of Android. However you can notice a faster screen refresh rate, and high end Android wins there. I wouldn't trade 90 (or 120hz) screens for a faster CPU.


But crucially, nowhere near as fast as iPhones: https://www.tomsguide.com/news/iphone-12-benchmarks-this-des...


In my experience that's not true. A Note20 Ultra feels much faster and smoother to use than an iPhone 12 Pro.


.


I think it's a bit of both... I don't doubt for a second that Apple creating both the CPU and Software together are why the X86 emulation is as good as it is.

I'm actually thinking about getting one of these as I've been without a personal laptop for a while now. That said, I'm waiting for Docker support to get fleshed out a bit more as I wouldn't mind getting work done. I am interested in how the really high-performance RAM affects my workflows, as I've often hit 16gb as a sticking point; I got my work laptop upgraded to 32gb earlier in the year and my personal desktop is at 64gb.

I'm not sure it will replace my r9-3950X desktop for getting things done, but it's absolutely compelling for its space. I swore off Macs a while ago, and this might pull me back in.


I have used the latest iphone and the latest galaxy note and honestly the note feels faster. Maybe because of the screen refresh? But then the same question: what use is a faster chip when the end result feels slower?


I don't know that's true anymore.

Overall experience feels better on android. Few things that make difference for me.

1. Notifications are leagues ahead of anything on iOS.

2. Little accessibility features like sound search, automatic caption, text selection from any screen/pictures, better integration with google assistant adds up.

3. Customizability. Yes, even now iPhone is super limited in the layout you can have on home screen or the new app launcher. You can't group things or put them wherever you want on the screen. This is ridiculous.

4. Android phones have higher refresh rates. Almost any flagship in 2020. The punch hole camera feels better than the notch. Face ID = problematic in the pandemic with masks. Though, I like it generally but apple could have given touch ID on the same device as well.

5. For tinkerers, it has better support again. Youtube vanced, tachiyomi, advanced adblockers, "real" firefox with extensions, etc are only available on android.

The best part is you can get the pixel 4a for $300 with some carriers. 3 years of updates which is what typical upgrade cycle looks like for iPhone users even if they get "updates" for longer.

Source: Own both.


You're 100% right on notifications -- despite having borrowed a bunch of ideas from Android on how to handle notifications, iOS still doesn't get it completely right.

There was a point in time that I was pretty deep into the Android scene, but at this point in my life I just want my phone to work consistently and have great battery life.

After many years of using Android devices (including many Nexus / Pixel devices), I switched to an iPhone 11 Pro Max last year. I've been very happy with that switch, my phone always makes it through a day of use and I haven't ran into any of the quirks that I encountered on my Android devices on a regular basis.

I completely understand why you would prefer Android, but there are definitely people (including myself) that are happier with iPhones.


Yes. I have the same iPhone too. I just don't get why so many people here are fixated on chipset differences when average people care more about having better quality mic, speakers, etc. Even camera is good enough on most devices these days. The photos will not have huge differences after being compressed on online platforms in the end. How many people actually care about having raw photos?

I just don't think any flagship in 2020 is worth it for an average consumer when options like pixel 4a exists. iPhone se is not good due to the screen but otherwise it would have been a great contender as well. People notice 90hz crisp display more than faster opening apps.


You're not wrong, but things like a unified clipboard, continuity, the peace of mind which comes from knowing that there's a very small chance of malware coming in from the App Store, and (again my opinion) more polished indie apps like Ulysses, Things, Fantastical etc. keep the iPhone ahead in my book.

I think notifications are perfectly fine on iPhones. I've not used android for 4 years now, but if their notifications are the same as they were in 2016 (or just marginally different) then I don't miss much. I confess point 2 sounds very appealing. I do use custom DNS for adblocking, so I don't think I miss it on the iPhone, and tbh I've fallen in the rabbit hole of tinkering with my phone enough that I feel more productive without a phone I can tinker with.


No, notifications have changed a lot in recent years. And you can get a unified clipboard by installing an app for your OS (Windows or Linux) through the Play Store. You should try a new Pixel device to see the improvement.

Some apps are better on iOS but it's not a big difference because some apps are better on android as well. People care about different apps in the end.


> I don't understand the comparison to the iphone and android situation.

Because articles like that are pure nonsense.

Take the initial paragraphs. We already knew that faster processors don't necessarily run hotter. People are simply conflating higher clock speeds with higher performance, while ignoring all of the ways CPU design optimizes computation. Likewise, we already knew that "emulation" doesn't imply that performance will be unbearably slow. There has been an enormous amount of research and development into the problem over the decades, to the point where much of our software is optimized at runtime.

None of this is meant to diminish what Apple has accomplished. They were clearly paying attention to details that have been largely ignored or at the very least rarely highlighted by their competitors. Now that it has happened, we will likely see more from the PC industry. Apple will likely have a hard time keeping up unless they have more up their sleeve.


> Likewise, we already knew that "emulation" doesn't imply that performance will be unbearably slow.

Apple’s Rosetta 2 doesn't do emulation; it translates the x86 code into ARM code on the fly and runs it, which is why most apps run faster on M1 Macs than they do on Intel Macs--and comparably priced PCs.

> Now that it has happened, we will likely see more from the PC industry.

Probably not. If Intel and AMD were capable of this kind of performance per watt, they’d already be doing it. Years ago.

Because the latest x86 chip has to be backwards compatible with 40 years of legacy code, there are certain limitations Intel and AMD just have to live with.

> Apple will likely have a hard time keeping up unless they have more up their sleeve.

It's ironic that it's Apple that will have a hard time keeping up when they've already disrupted Intel and AMD. Remember, it's not just a processor; it's a system on a chip that contains an 8-core GPU, 16-core Neural Engine for machine learning tasks and other components.

This has obviously been in the works for several years; Apple wouldn't have gone down this path if they didn't believe it would give them a sustainable competitive advantage for years to come.

Even with its ridiculous performance, the M1 is the entry-level consumer version of the M series suitable for (in Apple's lineup) entry-level consumer hardware. The M1-based MacBook Air doesn't have a fan and it's faster and more capable now than many (most?) PCs that cost more and run hotter.

One thing we know: the M2 will be faster and more capable than the M1 and won't be limited to 16 GB. We'll probably see M2-based Macs by the summer of 2021.


Agreed - this is also why markets are inefficient and it can be easy to make money buying stocks.

M1 is a clear example of something a lot better where competitors are unable to keep up or adapt.

Yet tons of people in this thread still make up (bad) reasons why Intel/AMD have nothing to worry about.

I remember similarly dumb arguments about the iPhone too in 2007. Some people are just unable to recognize major change even when it's blindingly obvious.

Status quo bias is strong.


> Apple’s Rosetta 2 doesn't do emulation; it translates the x86 code into ARM code on the fly and runs it

That’s…emulation.


Nope.

First line from Wikipedia[1]:

Rosetta is a dynamic binary translator developed by Apple Inc. for macOS…

Follow the link for Dynamic Binary Translator[2]:

Dynamic binary translation (DBT) looks at a short sequence of code—typically on the order of a single basic block—then translates it and caches the resulting sequence. Code is only translated as it is discovered and when possible, and branch instructions are made to point to already translated and saved code (memoization).

Dynamic binary translation differs from simple emulation (eliminating the emulator's main read-decode-execute loop—a major performance bottleneck), paying for this by large overhead during translation time. This overhead is hopefully amortized as translated code sequences are executed multiple times.

[1]: https://en.wikipedia.org/wiki/Rosetta_(software)

[2]: https://en.wikipedia.org/wiki/Binary_translation#Dynamic_bin...
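The memoized translate-and-cache loop described above can be sketched as a toy model (illustrative only — the guest "ISA", `translate_block`, and `run` are all made up here, and this has nothing to do with Rosetta's real internals):

```python
# Toy dynamic binary translator: translate "guest" instructions (just strings
# here) one basic block at a time into host callables, caching each translated
# block so it is only translated once (memoization).
translation_cache = {}

def translate_block(pc, guest_code):
    """Translate one basic block starting at pc into a host function."""
    ops, end = [], pc
    while end < len(guest_code):
        op = guest_code[end]
        ops.append(op)
        end += 1
        if op.startswith("branch"):  # basic blocks end at branch instructions
            break

    def host_fn(state):
        # The "generated host code": a closure applying each op to the state.
        for op in ops:
            if op == "inc":
                state["acc"] += 1
            elif op.startswith("branch"):
                state["pc"] = int(op.split()[1])
                return
        state["pc"] = end  # fell off the end of the block

    return host_fn

def run(guest_code, steps):
    state = {"pc": 0, "acc": 0}
    for _ in range(steps):
        pc = state["pc"]
        if pc >= len(guest_code):
            break
        if pc not in translation_cache:      # translate only on first visit;
            translation_cache[pc] = translate_block(pc, guest_code)
        translation_cache[pc](state)         # later visits run cached host code
    return state["acc"]
```

Here `run(["inc", "inc", "branch 0"], steps=3)` executes the same block three times but translates it only once (returning 6, with one cache entry) — that amortization is exactly what the quoted passage says separates DBT from a plain read-decode-execute loop.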


Believe me, I know how Rosetta works. You don't need to quote Wikipedia at me ;)

Rosetta absolutely is an emulator, which has traditionally referred to taking code from one CPU architecture and making it run on another (in contrast with taking a binary of the same architecture but running it in another environment, which is often called a "compatibility layer" or in Apple's terms a "simulator", or the hardware-assisted version of this which is virtualization). There are many ways to write the part of an emulator that mimics another CPU; the simplest is an interpreter loop, but there are ahead-of-time and just-in-time code generation techniques as well. Dynamic binary translation is quite simple, as your quote mentions; Rosetta 2 instead achieves its performance through static, ahead-of-time binary translation into a form amenable to its runtime. Things that it cannot translate statically go through a fairly advanced optimizing JIT that goes way beyond what a dynarec would typically do.


M1 is still just a tablet SoC in a laptop form factor.

We don't know what the efficiency will look like after you add all the I/O you need for a real desktop, we don't know how the memory subsystem will deal with 4 more cores that are just as fast. What's the point of having fast cores if you cannot feed them?

We don't know how lower yields for bigger chips on the newest nodes will affect their costs.

People act like Apple changed the game or is light years ahead of everyone but they literally haven't made a proper desktop-class chip yet. It's not because they don't care about higher core chips or chips with proper IO. They haven't made them because they CAN'T (yet).

So yes, Apple does have catching up to do.


> M1 is still just a tablet SoC in a laptop form factor.

That wouldn't be good enough for production hardware. The developer transition kits had the iPad Pro's A12Z SoC in a Mac mini enclosure for developers to test things on. For example, the A series doesn't support virtualization because that's not required on a phone or tablet. But the M1 supports this and other laptop/desktop features. Even the A14 has only 6 CPU cores to the M1's 8, and just two of them are high-performance cores versus the M1's four. This isn't just a tablet SoC.

> We don't know what the efficiency will look like after you add all the I/O you need for a real desktop, we don't know how the memory subsystem will deal with 4 more cores that are just as fast. What's the point of having fast cores if you cannot feed them?

Perhaps you haven't been paying attention, but the Mac mini is a real desktop. And all of the M1 Macs have blazingly fast memory access. Remember, it's an SoC—the memory is on the same die as the CPUs, GPUs and they all have equal access to it. No bus to go across. That's why it's so fast.

> People act like Apple changed the game or is light years ahead of everyone but they literally haven't made a proper desktop-class chip yet.

The M1 already has features no "desktop" chip has, such as 8 instruction decoders. This may not sound like a big deal, but none of the Intel or AMD chips—not Threadripper, not Zen 3—has more than 4. Why this matters:

    "It is because the ability to run fast depends
    on how quickly you can fill up the ROB with
    micro-ops and with how many. The more quickly
    you fill it up and the larger it is the more
    opportunities you are given to pick instructions
    you can execute in parallel and thus improve performance.

    Machine code instructions are chopped into micro-ops
    by what we call an instruction decoder. If we have more
    decoders we can chop up more instructions in parallel
    and thus fill up the ROB faster.

    And this is where we see the huge differences.
    The biggest baddest Intel and AMD microprocessor
    cores have 4 decoders, which means they can decode
    4 instructions in parallel spitting out micro-ops.

    But Apple has a crazy 8 decoders. Not only that
    but the ROB is something like 3x larger. You can
    basically hold 3x as many instructions. No other
    mainstream chip maker has that many decoders in
    their CPUs."
So Intel or AMD will just add 8 instruction decoders to increase their throughput, right? Nope:

    "However on an x86 CPU the decoders have no clue
    where the next instruction starts. It has to
    actually analyze each instruction in order to see
    how long it is.

    The brute force way Intel and AMD deal with this
    is by simply attempting to decode instructions at
    every possible starting point. That means we have
    to deal with lots of wrong guesses and mistakes which
    has to be discarded. This creates such a convoluted
    and complicated decoder stage, that it is really
    hard to add more decoders. But for Apple it is
    trivial in comparison to keep adding more.

    In fact adding more causes so many other problems
    that 4 decoders according to AMD itself is basically
    an upper limit for how far they can go.
    *This is what allows the M1 Firestorm cores to
    essentially process twice as many instructions as
    AMD and Intel CPUs at the same clock frequency.*"
As I've said in other threads on HN, there's no way Apple's first laptop/desktop chip should be competitive with AMD, but it is:

    "As far as I remember from performance benchmarks
    the newest AMD CPU cores, the ones called Zen3
    are slightly faster than Firestorm cores. But
    here is the kicker, that only happens because the
    Zen3 cores are clocked at 5 GHz. Firestorm cores
    are clocked at 3.2 GHz. The Zen3 is just barely
    squeezing past Firestorm despite having almost
    60% higher clock frequency.

    So why doesn’t Apple increase the clock frequency
    too? Because higher clock frequency makes the chips
    hotter. That is one of Apple’s key selling points.
    Their computers unlike Intel and AMD offerings barely
    need cooling.

    In essence one could say Firestorm cores really
    are superior to Zen3 cores. Zen3 only manages to
    stay in the game by drawing a lot more current and
    getting a lot hotter. Something Apple simply chooses
    not to do."
> So yes, Apple does have catching up to do.

Come again?

The Zen3 barely beats the M1, which runs at 60% of the speed and a small fraction of the power. It can essentially process twice as many instructions at the same clock frequency. They're already ahead in many key areas, with performance per watt being the most obvious. There are issues with the x86 architecture, like instructions ranging from 1 to 15 bytes, which limits Intel and AMD from being able to process as many instructions per clock cycle as ARM processors in general and the M1 especially.

I’m quoting from the article "Why is Apple’s M1 Chip So Fast?", which has a lot more technical details: https://erik-engheim.medium.com/why-is-apples-m1-chip-so-fas...
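The decode asymmetry in the quotes above can be shown with a toy model (hypothetical byte formats, not real ARM or x86 encodings). With fixed-width instructions, every start offset is pure arithmetic, so the stream can be sliced up for any number of parallel decoders; with variable-width instructions, instruction N+1's start is unknown until instruction N has been at least partially decoded:

```python
def decode_fixed(stream, width=4):
    # Every instruction's start offset is i * width, known with no decoding
    # at all, so the slices below could be handed to 8 decoders in parallel.
    return [stream[i:i + width] for i in range(0, len(stream), width)]

def decode_variable(stream):
    # Toy encoding: the first byte of each instruction is its own length.
    # The next instruction's start depends on this one's length, forcing a
    # sequential walk (or speculative decoding at every possible offset).
    insns, pc = [], 0
    while pc < len(stream):
        length = stream[pc]  # assumes a well-formed stream (length >= 1)
        insns.append(stream[pc:pc + length])
        pc += length
    return insns
```

`decode_fixed` parallelizes trivially; `decode_variable` is an inherently serial loop, which is the structural problem the quoted passage describes for x86 front ends.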


Another example: M1s can't do SMT. No one is denying that they are good chips, but there's a lot of catching up for them to do.

> And all of the M1 Macs have blazingly fast memory access. Remember, it's an SoC—the memory is on the same die as the CPUs, GPUs and they all have equal access to it. No bus to go across.

What does this even mean? Memory works at a certain voltage and clock speed, and has a capped rate at which it can transfer data if those are kept constant. For DDR4-3200 it's generally 3,200 MT/s (megatransfers per second).
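For what it's worth, the peak number follows directly from those figures: bandwidth is transfers per second times bytes per transfer, whether or not the DRAM is soldered. A back-of-the-envelope sketch (the M1 figure assumes the 128-bit LPDDR4X-4266 configuration reported in teardowns):

```python
# Peak theoretical DRAM bandwidth = transfer rate * bytes per transfer.
def peak_bw_gb_s(megatransfers_per_s, bus_bits):
    return megatransfers_per_s * 1e6 * (bus_bits / 8) / 1e9

laptop_ddr4 = peak_bw_gb_s(3200, 128)  # dual-channel DDR4-3200 (2 x 64-bit)
m1_lpddr4x  = peak_bw_gb_s(4266, 128)  # LPDDR4X-4266 on a 128-bit bus
# laptop_ddr4 -> 51.2 GB/s, m1_lpddr4x -> ~68.3 GB/s
```

So soldering the memory into the package doesn't change the MT/s; the M1's edge comes from faster LPDDR4X and a wide bus shared by CPU and GPU — and there is still very much a memory bus involved.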

What does every core has equal access to it mean? What does soldering SDRAM change in terms how many transfers/sec it can do? What do you mean there is no bus?

> The M1 already has features no "desktop" chip has, such as 8 instruction decoders.

What makes going wide a "desktop" class feature? Going wide isn't free, there is transistor cost associated. A M1 chip has 16 billion transistors for 8 cores. AMD's EPYC Rome has 39.54 billion transistors for 64 cores. That's 8 times as many cores. And that's including ~9 billion transistors just for the I/O die.

If you look at just the chiplets, AMD's 8-core complexes take up only 3.9 billion transistors each. And these are 8 high-performance cores with SMT; even removing the GPU/NPU engines from the M1, AMD's design clearly has a lower transistor cost.

> The Zen3 barely beats the M1

AMD wasn't trying to make an entry-level laptop chip, so it's not good at being an entry-level laptop chip? How do you think the M1 would fare if we tried to compare them on datacenter CPU metrics?

I understand why everyone is excited, but they aren't looking at the whole picture.


Just read "Why Is Apple’s M1 Chip So Fast?" [1] to answer most of these questions.

I’ll just hit the obviously erroneous issues here.

> A M1 chip has 16 billion transistors for 8 cores. AMD's EPYC Rome has 39.54 billion transistors for 64 cores.

It doesn't make sense to compare the M1 with the EPYC Rome. They couldn't be more different and were developed for dramatically different use cases.

(I also don't think it's a good look when folks feel compelled to state that Apple's M1 isn't as fast as AMD's $4,000 64-core processor, as if this means anything. Of course the EPYC Rome is faster—duh. Anyway…)

16 billion transistors on the M1 SoC (emphasis on system) gets you an 8-core GPU, a 16-core Neural Engine, a digital signal processor, a Secure Enclave, an image processing unit, a video encoder/decoder and of course 8 CPU cores and 8 or 16 GB of memory.

Instruction decoders feed the out-of-order execution machinery; more decoders enable you to have more instructions "in flight," as they say, for the execution cores to run in parallel.

Again, you should read the article for the details, but due to the legacy architecture and technical debt of a 40-year-old instruction set, not to mention instructions ranging in size from 1 to 15 bytes, it's not easy for the decoders to juggle all of the instructions for out-of-order execution. Yes, even the EPYC Rome has only 4 instruction decoders.

Meanwhile, the instructions for Apple's Firestorm cores are all the same size, making decoding the instructions much easier and faster; so much so that the M1 has 8 decoders, which is unheard of in a mainstream processor.

Here's the bottom line quote from the article: "In fact, adding more causes so many other problems that four decoders according to AMD itself is basically an upper limit for how far they can go. This is what allows the M1 Firestorm cores to essentially process twice as many instructions as AMD and Intel CPUs at the same clock frequency."

Apple and AMD solved the "get as many instructions in the pipeline as possible and process them as quickly as possible" problem two different ways.

AMD does it with threading and many cores; Apple does it by increasing the number of instruction decoders and by having 60+ GB/s of memory bandwidth over a 128-bit bus. Apple didn't implement SMT because they don't need it; they have no issues keeping their instruction pipeline full.

Bottom line: Apple's approach has more headroom than AMD's. AMD can't add more instruction decoders, so they crank up the clock speed and add threads to push more instructions through per second. But the M1 is so efficient that it processes twice as many instructions at the same clock frequency.

The M1 runs at 3.2 GHz; it's got a higher ceiling because Apple can add more cores and increase the clock speed a lot before it gets anywhere near the power consumption and other physical limits that AMD is starting to run up against.

[1]: https://debugger.medium.com/why-is-apples-m1-chip-so-fast-32...


> It doesn't make sense to compare the M1 with the EPYC Rome. They couldn't more different and were developed for dramatically different use cases.

The author of your beloved article compared the M1 to Zen 3, and all Zen 3 CPUs use the same 8-core chiplet, from the cheapest 6-core Ryzen 5 5600X to the 64-core EPYC. All of them are made of the exact same chiplets; just the number of chiplets varies. If you have 2 chiplets you get 16 cores, etc.

> Yes, even the EPYC Rome only has 4 instruction decoders.

It doesn't have "4 decoders"; it's just one 4-way instruction decoder. It's not the same thing.

> Firestorm cores to essentially process twice as many instructions as AMD and Intel CPUs at the same clock frequency.

A decoder doesn't process instructions; it only interprets them and splits them into smaller micro-operations. And it doesn't make sense to compare instruction counts across different ISAs anyway: ARM has smaller, simpler, and consequently more numerous instructions. One x86-64 instruction more often than not maps to multiple ARM instructions. It just makes no sense to compare counts.

I sort of stopped reading your comment around here, sorry about that.



The flagship iPhone absolutely crushes any flagship Android phone in benchmarks; it's not even close. It has been like this for many years.

It is also very common that people that like Android are not aware of this for some reason.


It also basically doesn't matter for 99% of people. Most people just need their phone to be fast enough to run common applications responsively. Benchmarks are irrelevant to the average user, and are mostly a dick measuring contest. People tend to prefer mobile platforms on the basis of features that they offer, practical hardware traits like battery and connectivity, and familiarity.


Completely agree with your sentiment. But a lot of people are used to thinking about performance as if it matters much anymore. You'd be hard pressed to find much of a difference between yearly updates of the same model phone. But I think a lot of people in this community will look at those numbers; that's how most of us grew up thinking about PCs, and it seems to be how Android devices have been marketed as well. I think now the majority of people just upgrade to the latest phone of the OS they prefer.

Personally, after deleting social media and a lot of other items off my phone, I wonder what I really use it for other than texting, taking pictures, and checking email and my calendar. I could easily get by with some basic phone (I've honestly contemplated getting some e-ink phone with basic capabilities). But there are plenty of people out there who use their phone constantly and for a host of things.

All that to say, I feel like phones won't have another large advance for some time, just small iterative changes. The jump from the original iPhone to the iPhone 3G seemed gigantic. Sure, there have been form factor changes, on the Android front too, but nothing revolutionary to the majority of users.


Can't speak for anyone else, but for me, the biggest sticking point is being able to sideload software and use other app stores... not that I pull in much, but I don't want my device manufacturer telling me what I'm allowed to run on what I buy.

Sometimes this means the software is more clunky, less polished and isn't as fast... but I'm still using my now 3-year-old Pixel 2 XL, which is still mostly acceptable (I really need a new battery).


AnTuTu Benchmark smartphones ranking:

Vivo Iqoo 5 Pro 671,218

OnePlus 8 Pro 590,112

iPhone 12 Pro 579,778

As ever, it depends on what you're measuring.


Camera and screen have nothing to do with CPUs, right? And battery life also depends on battery size, so it's difficult to compare.

AFAIK, iPhones do have the fastest benchmarks (and the smoothest experience) across all phones.


There is quite a significant difference between iOS and android devices, there is no android device that competes with the performance of iOS right now. Heck, most laptops don’t even compete with iPads these days.


The whole "it isn't the fastest, it doesn't have the biggest" argument comes with big caveats.

For example, iPhones don't have the longest battery life, but the phone that does has twice the battery capacity while drawing much more power than iPhones do. If Apple decided to put a 5,000+ mAh battery in an iPhone, Android phones would have no way of catching up to its battery life.

Another big thing is the latest screen tech. Apple can't put an OLED screen on the iPad, since Samsung and LG can't manufacture enough large OLED panels to meet Apple's demand; Samsung, meanwhile, is fine doing it, since they won't sell enough tablets for supply to matter.

In the end it's not that the iPhone is clearly inferior to Androids, it's mostly that Apple has some numbers in mind that they consider optimal and going beyond those won't increase their sales (certain amount of battery, certain screens, etc).


An Android could have amazing hardware but everyone knows the OS isn't going to get updated and you're going to have to pay to replace it in 2-3 years.


> The longest battery isn't on an iphone, the best camera isn't on an iphone, the latest screen tech isn't on an iphone, etc. etc.

It is not on any single Android model either. Fragmentation is a thing.


Lol what are you smoking? iPhone as a whole has trounced Android repeatedly over the last few years. There really is no comparison aside from Android having some [random esoteric] hardware that iPhone doesn't for a particular year. None of this is focused on battery life, performance, and efficiency. It is window dressing on top of an inferior architecture.

Also I fundamentally will not buy a piece of hardware that will cease receiving security updates within 36 months of purchase. That's insane, and frankly a major fuck you to anyone considering buying your product.


What can you do on iphone that can't be done better on an android phone?


Two that leap to mind immediately: Install the latest OS on a device older than a year old. Get money for your old phone when you upgrade.


But those have nothing to do with the supposedly "incredible" chip performance Apple has that Qualcomm doesn't.


Resale value absolutely has a lot to do with the better chip performance. You can't sell a dog-slow phone.


You have reduced the scope of the argument. It is a question of efficiency, performance, and battery life. iPhone beats Android in all of those categories.

Of course iPhone and Android can do all the same tasks. One just does them with far better efficiency and power usage.


How does iPhone beat Note20 in battery life if the Note lasts longer?

How does the chip performance matter if the Note feels faster in opening apps and scrolling due to the refresh rate?

How does efficiency matter if the Note holds more apps in memory for faster task switching?

How does a faster chip matter if the image processing is way superior in the Note so your photos almost always come out better?

The difference is subjective. I would have expected it to be objective like in the M1 laptop. The new air and pro absolutely obliterate other comparable systems in every metric.


The fact that reviewers of M1 chips are also saying things like "Safari is faster than Chrome" or "iOS is faster than Android" does make their other statements on the M1 lose credibility.

"Safari is faster than Chrome"? Nope. Now that IE is gone, Safari is the new IE, being the last non-evergreen browser, always behind in features. Many benchmarks have shown again and again that Chrome and Firefox are faster than Safari.

"iOS is faster than Android"? Nope. It just depends on the device. Of course the latest iPhone is faster than a $300 mid-to-low-range Android device. Pick the top Android devices vs. iPhones and you will see it's a tough race with no clear winner.


> So many benchmarks again and again proved Chrome and Firefox to be faster than Safari.

In the benchmarks I've seen (https://arstechnica.com/gadgets/2020/11/google-chrome-is-ava...) and the ones I've run myself, Safari annihilates Chrome.

> Pick the top Android devices vs iPhones and you will see it's a tough race with no clear winner.

No, all Android phones are clearly slower than the iPhone. https://www.tomsguide.com/news/iphone-12-benchmarks-this-des...


It turns out that the benchmarks for the M1 vs. the latest generation Intel & AMD CPUs are indeed overblown, and it is an incremental improvement more than a great leap forward.

The source of the confusion has been the benchmarking software. To saturate one core on an Intel processor you need to run two threads, because that's the way they are designed. So the single thread benchmarks that have been used so far have been using 50% of the capacity of an Intel CPU core and comparing it with 100% of the capacity of an M1 CPU core.

This article breaks it down fully: https://wccftech.com/why-apple-m1-single-core-comparisons-ar...


Have you read the comments on that article? It's not nearly the smoking gun its possibly clickbait title suggests.

More discussion: https://linustechtips.com/topic/1276532-exclusive-why-apple-...


I didn't until you linked to it, but having read it, it still appears that comparing single thread to single thread is not very accurate. Neither would core-to-core be, considering single-threaded performance does hold some weight for real-world workloads.

I guess at the very least CPU benchmarking software should have a "thread to thread" benchmarker alongside a "core to core" benchmarker, or something along those lines.

That would be in the spirit of having benchmarks indicate real-world usage.
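A hedged sketch of what such a comparison could look like (toy CPU-bound workload, worker processes standing in for threads; unpinned, so the OS decides core placement — on Linux you could pin both workers to the two hyperthreads of one physical core via `os.sched_setaffinity` to isolate the SMT effect):

```python
import time
from concurrent.futures import ProcessPoolExecutor

def spin(n):
    """CPU-bound busy work so the timing reflects compute, not I/O."""
    acc = 0
    for i in range(n):
        acc = (acc + i * i) % 1_000_003
    return acc

def run_workload(workers, total=2_000_000):
    """Split `total` iterations across `workers` processes and time it."""
    chunk = total // workers
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(spin, [chunk] * workers))
    return time.perf_counter() - start, results

if __name__ == "__main__":
    t1, _ = run_workload(1)  # "thread to thread": one worker doing all the work
    t2, _ = run_workload(2)  # "core to core": two workers sharing the work
    print(f"1 worker: {t1:.3f}s, 2 workers: {t2:.3f}s")
```

With pinning, the gap between the two timings approximates the SMT benefit for that workload; without pinning it just measures scaling across whatever cores the scheduler picks.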


Eh? I mean, by the same token you could say that single-core benchmarks of Intel chips are invalid, because you need eight threads or whatever to saturate a POWER7's multi-way SMT.

The main purpose of single-threaded benchmarks is to approximate performance for things which are actually single-threaded.


That article makes no sense whatsoever. And whoever wrote it doesn't understand what they're talking about.

SMT is designed to boost performance in multithreaded workloads.

It can be thought of as multitasking for a CPU core.

Using two threads for one benchmark and one thread for the other is not comparing single-core performance.

Besides, why wouldn't the CPU schedule the load onto other cores?


Of course, the scheduler can easily be told to run both benchmark threads on the two hyperthreads of a given core.

The argument to be made is that because of the resources dedicated to SMT, single thread on AMD/Intel versus single thread on Apple is not measuring the true potential performance of the whole core. In principle, some multithreaded workload over all available threads could be a better metric for whole-processor performance.


Fair enough. But if you're comparing single threaded performance, there isn't a reason to split the workload into two threads for AMD/Intel and have it as a single thread for Apple.

If I had a single-threaded application, or a single-threaded critical path of a multithreaded application, I don't see a scenario where SMT would help.

AnandTech have done a comparison here: https://www.anandtech.com/show/16261/investigating-performan...

You can see single-threaded results are all within 1% with SMT on and off.


Benchmarking multicore Performance+Efficiency (Apple) versus SMT (AMD) versus SMT+Wide vectors (Intel) is never going to provide perfect apples-to-apples comparisons. There's an entirely reasonable argument that single-thread performance is oversold as a metric, and that the focus on it advantages some platforms over others.

At the end of the day, benchmarks are inherently only an approximate measure of how real-world code will perform. SMT, basically by definition, is rarely going to benchmark well, but is inherently going to show more of a benefit when running real-world mixed workloads.



