
I don't understand the comparison to the iphone and android situation. As far as I can see, android phones are highly competitive with iphone on everything. The longest battery isn't on an iphone, the best camera isn't on an iphone, the latest screen tech isn't on an iphone, etc. etc.

Coming to the M1 laptops, they beat comparable Windows laptops in every single metric. It's literally just a matter of looking at the numbers and realizing how M1 has every Intel chip beat (in this category).

In fact this makes me wonder: if Apple chips are so far ahead, why are iphones not the fastest, best smartphones by a wide margin?



> In fact this makes me wonder: if Apple chips are so far ahead, why are iphones not the fastest, best smartphones by a wide margin?

Apple has pretty consistently had the fastest SoCs for phones and tablets for a few years now, particularly in single-core performance. Here are some benchmark results of the iPhone 12 versus some of its Android competitors [1]. Notice that last year’s iPhone 11 Pro Max still outperforms Android phones from this year as well.

I think part of the reason this isn’t a major differentiator (for phones at least) is that phones have been “fast enough” for several years now. Given the very aggressive throttling that is utilized on these types of devices, I think the primary difference ends up being battery life / efficiency rather than raw performance.

It’s also worth noting that while the performance / efficiency gap between x86 and ARM seems to be pretty sizable at this point, the gap between various current generation ARM CPUs isn’t nearly as drastic.

1: https://www.tomsguide.com/news/iphone-12-benchmarks-this-des...


> It’s also worth noting that while the performance / efficiency gap between x86 and ARM seems to be pretty sizable at this point

It's really not, though. To judge performance/efficiency you have to compare performance at a given power target, which you can't do for most ARM CPUs. But you can get 8-core x86 CPUs at 15W, or about 1.9W per core (example: the 4800U). Similarly, 64-core EPYCs come in at around 3W per core or less. These are all well within the per-core power range of your typical big-core ARM chip.
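As a quick sanity check of those per-core numbers (a sketch only: TDP figures are assumptions from public spec sheets, and TDP is just a rough proxy for actual draw under load):

```python
# Back-of-the-envelope per-core power for the parts mentioned above.
# TDP figures are assumptions from public spec sheets, and TDP is only
# a rough proxy for actual power draw under load.

ryzen_4800u_tdp_w, ryzen_4800u_cores = 15, 8
epyc_64c_tdp_w, epyc_64c_cores = 200, 64   # a 200 W 64-core EPYC SKU, as an example

per_core_4800u = ryzen_4800u_tdp_w / ryzen_4800u_cores   # roughly 1.9 W/core
per_core_epyc = epyc_64c_tdp_w / epyc_64c_cores          # roughly 3.1 W/core
```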

The perceived gap is larger than it really is, because there just isn't much of an x86 tablet-focused market. The M1's jump into laptop space lets us really see it shine, while the other ARM jumps into laptop space were a joke. So comparing equivalent power targets becomes challenging.

But take your article's benchmarks, for example. The Snapdragon manages something like 60% of the performance of the A12. So take the existing M1 Mac, cut the performance drastically, and suddenly it wouldn't be interesting at all. The performance/efficiency wouldn't be impressive at all if it were getting 60% of the performance it gets now.


Yeah, that's what puzzles me: why has chip superiority not translated into an objectively better phone? For laptops, the M1 machines are best in class in everything, battery and performance.


To a large extent, current Android flagship CPUs are "good enough" for the things people do with phones. Sure, current iPhone chips are much faster, but it doesn't matter that much for the vast majority of consumers (even for games, mobile games tend to target the lowest common denominator to a much greater extent than PC games; no-one's going to launch a mobile game that runs on the latest couple of generations of iOS and nothing else).

iPhones do likely get better longevity out of their faster hardware, mind you.


I honestly don't know why you're being downvoted.

I think for some people (e.g. my oldest brother), the iPhone is "the best." He's not a tech person - and that might be a perspective that you're missing.

I tend to agree that there are great Android phones that seem competitive with iPhones (disclosure: I haven't really owned a flagship phone for a while; this is one of Android's issues! The flagships aren't that much better than the $300-400 phones).

But I know a lot of people who think the iPhone is the premium phone experience, and if they're going to spend over a thousand dollars on a phone, it'll be an iPhone.

My point here is that for many people, the iPhone is objectively better.


He is being downvoted by people who consider the choice of a specific vendor a religious issue and will vote down anyone who, in their eyes, disparages their vendor of choice. Sadly, this phenomenon only reinforces the religious behaviour: when those outside the cult question the infallibility of their vendor, the believers double down on the magnificence and superiority of their vendor's products over everything else, which provokes more reactions, and the cycle continues.

The power/performance ratio of the M1 SoC does not need religious adoration to be seen as a significant step in establishing ARM as a real competitor to the AMD64 hegemony. The wait is now for other vendors - Samsung, Qualcomm, etc. - to launch similar "desktop-oriented" SoCs which can be used both by traditional "desktop" vendors (HP, Dell, Lenovo and, to a limited extent, Samsung) and by traditionally mobile-oriented vendors like Xiaomi. Some of those vendors will eventually produce ARM-based platforms which support user-upgradeable memory, GPUs and storage - and possibly also user-upgradeable CPUs - as they do for AMD64-based systems. Once those systems become available, ARM has a real chance to take over the market.


Great points. I remember back in (good lord) 2011 when the MRI research was shown [1].

I'm very excited about the new M1. While I'm tempted to buy it, I also can't wait to see what they do with it in their pro models, or what the second generation looks like.

The competition is great for everyone, and Apple going all in on ARM as they run from Intel's manufacturing failures is very well-timed.

I think this was somewhat inevitable, to be sure. Microsoft has had Qualcomm and AMD processors in its Surface line [2], and as TSMC continues to dominate manufacturing, I think we will see really exciting gains.

Also, I'm thrilled that CPUs are coming with tensor cores now. I think fast matrix operations might become decoupled from GPUs (at the least, you no longer need a >$400 GPU for them), and that's a win for everyone, too.

In two years, we could be able to reimplement early neural network research on a macbook air! (I mean, you already can, but instead of taking minutes/hours/days, even more recent things become accessible)

[1] https://www.engadget.com/2011-05-17-bbc-loving-apple-looks-l...

[2] https://www.theverge.com/2019/10/2/20888999/microsoft-surfac...


I don't know about the other metrics you talk about, but iPhones _are_ the fastest. In general, I find iPhones better than Android for general use - but that's a subjective opinion. Apple provides the best overall package, even if the only thing they beat everyone else on is speed (to the best of my knowledge).


Plus they seem to last a lot longer than your average Android smartphone in terms of long-term performance and updates. I'm holding out a (faint) hope that they'll make at least one generation with USB-C before going port-less, as that's the main thing stopping me from switching.


I think this is often about the experience of using it as opposed to raw compute performance benchmarks (even if M1 does excel in those). In many cases, be it because of integration, more care about design, or whatever, the apple experience feels more... snappy? Smooth? It's hard to put a finger on what exactly leads to this, and it may also be about the combination of many small details.


Apple phones are faster both in subjective experience and objective benchmarks.


Snapdragon 8x5 CPUs have been fast enough for a while now, and you wouldn't notice a difference as long as you run a decent flavor of Android. However, you can notice a faster screen refresh rate, and high-end Android wins there. I wouldn't trade a 90 (or 120) Hz screen for a faster CPU.


But crucially, nowhere near as fast as iPhones: https://www.tomsguide.com/news/iphone-12-benchmarks-this-des...


In my experience that's not true. A Note20 Ultra feels much faster and smoother to use than an iPhone 12 Pro.




I think it's a bit of both... I don't doubt for a second that Apple designing both the CPU and the software together is why the x86 emulation is as good as it is.

I'm actually thinking about getting one of these, as I've been without a personal laptop for a while now. That said, I'm waiting for Docker support to get fleshed out a bit more, as I wouldn't mind getting work done. I am interested in how the really high-performance RAM affects my workflows, as I've often hit 16 GB as a sticking point; I got my work laptop upgraded to 32 GB earlier in the year and my personal desktop is at 64 GB.

I'm not sure it will replace my R9 3950X desktop for getting things done, but it's absolutely compelling for its space. I swore off Macs for a while, and this might pull me back in.


I have used the latest iphone and the latest galaxy note and honestly the note feels faster. Maybe because of the screen refresh? But then the same question: what use is a faster chip when the end result feels slower?


I don't know that that's true anymore.

The overall experience feels better on Android. A few things that make a difference for me:

1. Notifications are leagues ahead of anything on iOS.

2. Little accessibility features add up: sound search, automatic captions, text selection from any screen or picture, better integration with Google Assistant.

3. Customizability. Yes, even now iPhone is super limited in the layout you can have on home screen or the new app launcher. You can't group things or put them wherever you want on the screen. This is ridiculous.

4. Android phones have higher refresh rates - almost any flagship in 2020. The punch-hole camera feels better than the notch. Face ID is problematic in the pandemic with masks; I like it generally, but Apple could have offered Touch ID on the same device as well.

5. For tinkerers, Android again has better support. YouTube Vanced, Tachiyomi, advanced ad blockers, "real" Firefox with extensions, etc. are only available on Android.

The best part is you can get the Pixel 4a for $300 with some carriers. Three years of updates, which is what the typical upgrade cycle looks like for iPhone users even if they get "updates" for longer.

Source: Own both.


You're 100% right on notifications -- despite having borrowed a bunch of ideas from Android on how to handle notifications, iOS still doesn't get it completely right.

There was a point in time that I was pretty deep into the Android scene, but at this point in my life I just want my phone to work consistently and have great battery life.

After many years of using Android devices (including many Nexus / Pixel devices), I switched to an iPhone 11 Pro Max last year. I've been very happy with that switch; my phone always makes it through a day of use and I haven't run into any of the quirks that I encountered regularly on my Android devices.

I completely understand why you would prefer Android, but there are definitely people (including myself) that are happier with iPhones.


Yes. I have the same iPhone too. I just don't get why so many people here are fixated on chipset differences when average people care more about having a better-quality mic, speakers, etc. Even the camera is good enough on most devices these days. The photos won't show huge differences after being compressed by online platforms anyway. How many people actually care about having raw photos?

I just don't think any flagship in 2020 is worth it for an average consumer when options like the Pixel 4a exist. The iPhone SE is not good due to the screen, but otherwise it would have been a great contender as well. People notice a crisp 90 Hz display more than faster-opening apps.


You're not wrong, but things like a unified clipboard, continuity, the peace of mind which comes from knowing that there's a very small chance of malware coming in from the App Store, and (again my opinion) more polished indie apps like Ulysses, Things, Fantastical etc. keep the iPhone ahead in my book.

I think notifications are perfectly fine on iPhones. I've not used android for 4 years now, but if their notifications are the same as they were in 2016 (or just marginally different) then I don't miss much. I confess point 2 sounds very appealing. I do use custom DNS for adblocking, so I don't think I miss it on the iPhone, and tbh I've fallen in the rabbit hole of tinkering with my phone enough that I feel more productive without a phone I can tinker with.


No, notifications have changed a lot in recent years. And you can get a unified clipboard by installing an app for your OS (Windows or Linux) through the Play Store. You should try a new Pixel device to see the improvement.

Some apps are better on iOS but it's not a big difference because some apps are better on android as well. People care about different apps in the end.


> I don't understand the comparison to the iphone and android situation.

Because articles like that are pure nonsense.

Take the initial paragraphs. We already knew that faster processors don't necessarily run hotter. People are simply conflating higher clock speeds with higher performance, while ignoring all of the ways CPU design optimizes computation. Likewise, we already knew that "emulation" doesn't imply that performance will be unbearably slow. There have been enormous amounts of research and development into the problem over the decades, to the point where much of our software is optimized at runtime.

None of this is meant to diminish what Apple has accomplished. They were clearly paying attention to details that have been largely ignored or at the very least rarely highlighted by their competitors. Now that it has happened, we will likely see more from the PC industry. Apple will likely have a hard time keeping up unless they have more up their sleeve.


> Likewise, we already knew that "emulation" doesn't imply that performance will be unbearably slow.

Apple’s Rosetta 2 doesn't do emulation; it translates the x86 code into ARM code on the fly and runs it, which is why most apps run faster on M1 Macs than they do on Intel Macs--and comparably priced PCs.

> Now that it has happened, we will likely see more from the PC industry.

Probably not. If Intel and AMD were capable of this kind of performance per watt, they’d already be doing it. Years ago.

Because the latest x86 chip has to be backwards compatible with 40 years of legacy code, there are certain limitations Intel and AMD just have to live with.

> Apple will likely have a hard time keeping up unless they have more up their sleeve.

It's ironic that it's Apple that will have a hard time keeping up when they've already disrupted Intel and AMD. Remember, it's not just a processor; it's a system on a chip that contains an 8-core GPU, 16-core Neural Engine for machine learning tasks and other components.

This has obviously been in the works for several years; Apple wouldn't have gone down this path if they didn't believe it would give them a sustainable competitive advantage for years to come.

Even with its ridiculous performance, the M1 is the entry-level consumer version of the M series suitable for (in Apple's lineup) entry-level consumer hardware. The M1-based MacBook Air doesn't have a fan and it's faster and more capable now than many (most?) PCs that cost more and run hotter.

One thing we know: the M2 will be faster and more capable than the M1 and won't be limited to 16 GB. We'll probably see M2-based Macs by the summer of 2021.


Agreed - this is also why markets are inefficient and it can be easy to make money buying stocks.

M1 is a clear example of something a lot better where competitors are unable to keep up or adapt.

Yet tons of people in this thread still make up (bad) reasons why Intel/AMD have nothing to worry about.

I remember similarly dumb arguments about the iPhone too in 2007. Some people are just unable to recognize major change even when it's blindingly obvious.

Status quo bias is strong.


> Apple’s Rosetta 2 doesn't do emulation; it translates the x86 code into ARM code on the fly and runs it

That’s…emulation.


Nope.

First line from Wikipedia[1]:

> Rosetta is a dynamic binary translator developed by Apple Inc. for macOS…

Follow the link for Dynamic Binary Translator[2]:

> Dynamic binary translation (DBT) looks at a short sequence of code—typically on the order of a single basic block—then translates it and caches the resulting sequence. Code is only translated as it is discovered and when possible, and branch instructions are made to point to already translated and saved code (memoization).

> Dynamic binary translation differs from simple emulation (eliminating the emulator's main read-decode-execute loop—a major performance bottleneck), paying for this by large overhead during translation time. This overhead is hopefully amortized as translated code sequences are executed multiple times.
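The translate-once-and-cache idea in that quote can be sketched in toy form. The "instructions" and translation rules below are invented purely to illustrate the memoization; a real DBT works on machine code, not strings:

```python
# Toy dynamic binary translator: translate one "basic block" the first
# time it is seen, cache the result, and reuse the cached translation on
# later executions. The instruction names and translation rules are
# invented purely to illustrate the memoization idea.

translation_cache = {}

RULES = {"LOAD": "ldr", "ADD": "add", "STORE": "str"}

def translate_block(block):
    # The expensive step: done once per unique block.
    return tuple(RULES[op] for op in block)

def run_block(block):
    key = tuple(block)
    if key not in translation_cache:       # translate only on first discovery
        translation_cache[key] = translate_block(block)
    return translation_cache[key]          # later runs skip translation entirely
```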

[1]: https://en.wikipedia.org/wiki/Rosetta_(software)

[2]: https://en.wikipedia.org/wiki/Binary_translation#Dynamic_bin...


Believe me, I know how Rosetta works. You don't need to quote Wikipedia at me ;)

Rosetta absolutely is an emulator, a term which has traditionally referred to taking code from one CPU architecture and making it run on another (in contrast with taking a binary of the same architecture but running it in a different environment, which is often called a "compatibility layer", or in Apple's terms a "simulator", or the hardware-assisted version of this, which is virtualization). There are many ways to write the part of an emulator that mimics another CPU; the simplest is an interpreter loop, but there are ahead-of-time and just-in-time code-generation techniques as well. Dynamic binary translation is quite simple, as your quote mentions; Rosetta 2 instead achieves its performance through ahead-of-time static binary translation into a form amenable to its runtime. Things it cannot translate statically go through a fairly advanced optimizing JIT that goes way beyond what a dynarec would typically do.


M1 is still just a tablet SoC in a laptop form factor.

We don't know what the efficiency will look like after you add all the I/O you need for a real desktop, we don't know how the memory subsystem will deal with 4 more cores that are just as fast. What's the point of having fast cores if you cannot feed them?

We don't know how lower yields for bigger chips on the newest nodes will affect their costs.

People act like Apple changed the game or is light years ahead of everyone but they literally haven't made a proper desktop-class chip yet. It's not because they don't care about higher core chips or chips with proper IO. They haven't made them because they CAN'T (yet).

So yes, Apple does have catching up to do.


> M1 is still just a tablet SoC in a laptop form factor.

That wouldn't be good enough for production hardware. The developer transition kits had a previous iPad's A12Z SoC in an enclosure for developers to test things on. For example, the A series doesn't support virtualization, because that's not required on a phone or tablet; the M1 supports this and other laptop/desktop features. The A14 also has fewer high-performance cores than the M1 (two versus four). This isn't just a tablet SoC.

> We don't know what the efficiency will look like after you add all the I/O you need for a real desktop, we don't know how the memory subsystem will deal with 4 more cores that are just as fast. What's the point of having fast cores if you cannot feed them?

Perhaps you haven't been paying attention, but the Mac mini is a real desktop. And all of the M1 Macs have blazingly fast memory access. Remember, it's an SoC—the memory sits in the same package as the CPU and GPU cores, which all have equal access to it. No long external bus to cross. That's why it's so fast.

> People act like Apple changed the game or is light years ahead of everyone but they literally haven't made a proper desktop-class chip yet.

The M1 already has features no "desktop" chip has, such as 8 instruction decoders. This may not sound like a big deal, but none of the Intel or AMD chips—not Threadripper, not Zen 3—has more than 4. Why this matters:

    "It is because the ability to run fast depends
    on how quickly you can fill up the ROB with
    micro-ops and with how many. The more quickly
    you fill it up and the larger it is the more
    opportunities you are given to pick instructions
    you can execute in parallel and thus improve performance.

    Machine code instructions are chopped into micro-ops
    by what we call an instruction decoder. If we have more
    decoders we can chop up more instructions in parallel
    and thus fill up the ROB faster.

    And this is where we see the huge differences.
    The biggest baddest Intel and AMD microprocessor
    cores have 4 decoders, which means they can decode
    4 instructions in parallel spitting out micro-ops.

    But Apple has a crazy 8 decoders. Not only that
    but the ROB is something like 3x larger. You can
    basically hold 3x as many instructions. No other
    mainstream chip maker has that many decoders in
    their CPUs."
So Intel or AMD will just add 8 instruction decoders to increase their throughput, right? Nope:

    "However on an x86 CPU the decoders have no clue
    where the next instruction starts. It has to
    actually analyze each instruction in order to see
    how long it is.

    The brute force way Intel and AMD deal with this
    is by simply attempting to decode instructions at
    every possible starting point. That means we have
    to deal with lots of wrong guesses and mistakes which
    has to be discarded. This creates such a convoluted
    and complicated decoder stage, that it is really
    hard to add more decoders. But for Apple it is
    trivial in comparison to keep adding more.

    In fact adding more causes so many other problems
    that 4 decoders according to AMD itself is basically
    an upper limit for how far they can go.
    *This is what allows the M1 Firestorm cores to
    essentially process twice as many instructions as
    AMD and Intel CPUs at the same clock frequency.*"
As I've said in other threads on HN, there's no way Apple's first laptop/desktop chip should be competitive with AMD, but it is:

    "As far as I remember from performance benchmarks
    the newest AMD CPU cores, the ones called Zen3
    are slightly faster than Firestorm cores. But
    here is the kicker, that only happens because the
    Zen3 cores are clocked at 5 GHz. Firestorm cores
    are clocked at 3.2 GHz. The Zen3 is just barely
    squeezing past Firestorm despite having almost
    60% higher clock frequency.

    So why doesn’t Apple increase the clock frequency
    too? Because higher clock frequency makes the chips
    hotter. That is one of Apple’s key selling points.
    Their computers unlike Intel and AMD offerings barely
    need cooling.

    In essence one could say Firestorm cores really
    are superior to Zen3 cores. Zen3 only manages to
    stay in the game by drawing a lot more current and
    getting a lot hotter. Something Apple simply chooses
    not to do."
> So yes, Apple does have catching up to do.

Come again?

The Zen 3 barely beats the M1, which runs at about 60% of the clock speed and a small fraction of the power. The M1 can essentially process twice as many instructions at the same clock frequency. Apple is already ahead in many key areas, with performance per watt being the most obvious. There are issues inherent to the x86 architecture, like instructions ranging from 1 to 15 bytes, which prevent Intel and AMD from processing as many instructions per clock cycle as ARM processors in general and the M1 especially.
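A rough sanity check of that claim, using the clock figures quoted above and the simple model performance ≈ IPC × frequency (real IPC varies considerably by workload):

```python
# Rough single-thread model: performance ≈ IPC × clock frequency.
# Clock figures are the ones quoted above; treating the two cores as
# roughly tied in single-core benchmarks is the quote's premise.

zen3_ghz = 5.0
firestorm_ghz = 3.2

# If performance is about equal, the IPC ratio is the inverse of the clock ratio:
firestorm_ipc_advantage = zen3_ghz / firestorm_ghz   # roughly 1.56x
```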

I’m quoting from the article "Why is Apple’s M1 Chip So Fast?", which has a lot more technical details: https://erik-engheim.medium.com/why-is-apples-m1-chip-so-fas...


Another example: M1s can't do SMT. No one is denying that they are good chips, but there's a lot of catching up for them to do.

> And all of the M1 Macs have blazingly fast memory access. Remember, it's an SoC—the memory is on the same die as the CPUs, GPUs and they all have equal access to it. No bus to go across.

What does this even mean? Memory works at a certain voltage and clock speed, and has a capped rate at which it can transfer data if those are kept constant. For DDR4-3200 it's generally 3,200 MT/s (megatransfers per second).

What does "every core has equal access to it" mean? What does soldering SDRAM change in terms of how many transfers per second it can do? What do you mean there is no bus?
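For reference, peak theoretical bandwidth is just transfer rate times bus width. The M1 numbers below (LPDDR4X at ~4,266 MT/s on a 128-bit bus) are assumptions from public teardowns, not official specs, and sustained bandwidth is always lower than peak:

```python
# Peak theoretical bandwidth = transfers/second × bus width in bytes.
# DDR4-3200 matches the 3,200 MT/s figure above; the M1 numbers are
# assumptions from public teardowns.

def peak_gb_per_s(megatransfers_per_s, bus_width_bits):
    bytes_per_transfer = bus_width_bits // 8
    return megatransfers_per_s * 1e6 * bytes_per_transfer / 1e9

ddr4_one_channel = peak_gb_per_s(3200, 64)    # roughly 25.6 GB/s
m1_unified = peak_gb_per_s(4266, 128)         # roughly 68.3 GB/s
```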

> The M1 already has features no "desktop" chip has, such as 8 instruction decoders.

What makes going wide a "desktop-class" feature? Going wide isn't free; there is a transistor cost associated with it. An M1 chip has 16 billion transistors for 8 cores. AMD's EPYC Rome has 39.54 billion transistors for 64 cores. That's 8 times as many cores, and that's including ~9 billion transistors just for the I/O die.

If you look at just the chiplets, AMD's 8-core complexes take up only 3.9 billion transistors each. And these are 8 high-performance cores with SMT; even after removing the GPU/NPU engines from the M1, AMD's design clearly has a lower transistor cost.

> The Zen3 barely beats the M1

AMD wasn't trying to make an entry-level laptop chip, so it's not good at being an entry-level laptop chip? How do you think the M1 would fare if we tried to compare them on datacenter CPU metrics?

I understand why everyone is excited, but they aren't looking at the whole picture.


Just read "Why Is Apple’s M1 Chip So Fast?" [1] to answer most of these questions.

I’ll just hit the obviously erroneous issues here.

> A M1 chip has 16 billion transistors for 8 cores. AMD's EPYC Rome has 39.54 billion transistors for 64 cores.

It doesn't make sense to compare the M1 with EPYC Rome. They couldn't be more different and were developed for dramatically different use cases.

(I also don't think it's a good look when folks feel compelled to state that Apple's M1 isn't as fast as AMD's $4,000 64-core processor, as if this means anything. Of course EPYC Rome is faster--duh. Anyway…)

16 billion transistors on the M1 SoC (emphasis on system) gets you an 8-core GPU, a 16-core Neural Engine, a digital signal processor, a Secure Enclave, an image processing unit, a video encoder/decoder and of course 8 CPU cores, plus 8 or 16 GB of memory in the same package.

Instruction decoders feed the out-of-order execution machinery; more decoders let you have more instructions "in flight", as they say, for the execution cores to run in parallel.

Again, you should read the article for the details, but due to the legacy architecture and technical debt of a 40-year-old instruction set, not to mention instructions ranging in size from 1 to 15 bytes, it's not easy for the decoders to juggle all of the instructions for out-of-order execution. Yes, even EPYC Rome only has 4 instruction decoders.

Meanwhile, the instructions for Apple's Firestorm cores are all the same size, making decoding the instructions much easier and faster; so much so that the M1 has 8 decoders, which is unheard of in a mainstream processor.

Here's the bottom line quote from the article: "In fact, adding more causes so many other problems that four decoders according to AMD itself is basically an upper limit for how far they can go. This is what allows the M1 Firestorm cores to essentially process twice as many instructions as AMD and Intel CPUs at the same clock frequency."

Apple and AMD solved the "get as many instructions in the pipeline as possible and process them as quickly as possible" problem two different ways.

AMD does it with threading and many cores; Apple does it by increasing the number of instruction decoders and by having 60+ GB/s of memory bandwidth over a 128-bit bus. Apple didn't implement SMT because they don't need it; they have no issues keeping their instruction pipeline full.

Bottom line: Apple's approach has more headroom than AMD's. AMD can't add more instruction decoders, so they crank up the clock speed and add threads to push more instructions through. But the M1 is so efficient, it processes twice as many instructions at the same clock frequency.

The M1 runs at 3.2 GHz; it's got a higher ceiling because Apple can add more cores and increase the clock speed a lot before it gets anywhere near the power consumption and other physical limits that AMD is starting to run up against.

[1]: https://debugger.medium.com/why-is-apples-m1-chip-so-fast-32...


> It doesn't make sense to compare the M1 with the EPYC Rome. They couldn't more different and were developed for dramatically different use cases.

The author of your beloved article compared the M1 to Zen 3, and all Zen 3 CPUs use the same 8-core chiplet, from the cheapest 6-core Ryzen 5 5600 to the 64-core EPYC. All of them are made of the exact same chiplets; only the number of chiplets varies. If you have 2 chiplets you get 16 cores, etc.

> Yes, even the EPYC Rome only has 4 instruction decoders.

It doesn't have "4 decoders"; it's just one 4-way instruction decoder. It's not the same thing.

> Firestorm cores to essentially process twice as many instructions as AMD and Intel CPUs at the same clock frequency.

A decoder doesn't process instructions; it only interprets them and splits them up into smaller operations. It doesn't make sense to compare instruction counts across different ISAs anyway: ARM has smaller, simpler, and on account of that more numerous instructions. One x86-64 instruction more often than not maps to multiple ARM instructions. It just makes no sense to compare counts.

I sort of stopped reading your comment around here, sorry about that.



The flagship iPhone absolutely crushes any flagship Android phone in benchmarks; it's not even close. It has been like this for many years.

It is also very common that people who like Android are not aware of this, for some reason.


It also basically doesn't matter for 99% of people. Most people just need their phone to be fast enough to run common applications responsively. Benchmarks are irrelevant to the average user, and are mostly a dick measuring contest. People tend to prefer mobile platforms on the basis of features that they offer, practical hardware traits like battery and connectivity, and familiarity.


Completely agree with your sentiment. But a lot of people are used to thinking about performance as if it still matters much. You'd be hard pressed to find much of a difference between yearly updates of the same model phone. But I think a lot of people in this community will look at those numbers; that's how most of us grew up thinking about PCs, and it seems to be how Android devices have been marketed. I think now the majority of people just upgrade to the latest phone of the OS they prefer.

Personally, after deleting social media and a lot of other items off my phone, I wonder what I really use it for other than texting, taking pictures, and checking email and my calendar. I could easily get by with some basic phone (I've honestly contemplated getting an e-ink phone with basic capabilities). But there are plenty of people out there who use their phone constantly and for a host of things.

All that to say: I feel like phones won't have another large advance for some time, just small iterative changes. The jump from the original iPhone to the iPhone 3G seemed gigantic. Sure, there have been form-factor changes since, on the Android front too, but nothing revolutionary to the majority of users.


Can't speak for anyone else, but for me, the biggest sticking point is being able to sideload software and use other app stores... not that I pull in much, but I don't want my device manufacturer telling me what I'm allowed to run on what I buy.

Sometimes this means the software is more clunky, less polished, and isn't as fast... but I'm still using my now 3-year-old Pixel 2XL, which is still mostly acceptable (I really need a new battery).


AnTuTu Benchmark smartphones ranking:

Vivo Iqoo 5 Pro 671,218

OnePlus 8 Pro 590,112

iPhone 12 Pro 579,778

As ever, it depends on what you're measuring.


Camera and screen have nothing to do with CPUs, right? And battery life depends on battery size too, so it's difficult to compare.

AFAIK, iPhones do have the fastest benchmarks (and the smoothest experience) across all phones.


There is quite a significant difference between iOS and Android devices; no Android device competes with the performance of current iPhones. Heck, most laptops don't even compete with iPads these days.


The whole "isn't the fastest, isn't the biggest" argument comes with big caveats.

For example, iPhones don't have the longest battery life, but the phone that does has twice the battery capacity while drawing much more power than an iPhone. If Apple decided to stick a 5000+ mAh battery in an iPhone, Android phones would have no way of catching up to its battery life.
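The back-of-the-envelope math behind that claim is simple: runtime is roughly energy capacity divided by average power draw, so a more efficient phone can match a bigger battery. A minimal sketch, with made-up capacities and draws purely for illustration (none of these are measured figures for any real device):

```python
# Rough battery-life arithmetic: hours ~= stored energy / average draw.
# All numbers below are illustrative assumptions, not measured figures.

def battery_hours(capacity_mah: float, voltage: float, avg_draw_w: float) -> float:
    """Estimate runtime in hours from capacity (mAh), nominal cell
    voltage (V), and average power draw (W)."""
    energy_wh = capacity_mah / 1000 * voltage  # mAh -> Ah, then Ah * V = Wh
    return energy_wh / avg_draw_w

# Hypothetical big-battery Android vs. a smaller, more efficient phone:
big_battery = battery_hours(5000, 3.85, 2.2)   # 19.25 Wh at 2.2 W -> 8.75 h
efficient   = battery_hours(2800, 3.85, 1.5)   # 10.78 Wh at 1.5 W -> ~7.2 h
```

The point is that halving the draw buys you nearly as much runtime as doubling the capacity, without the weight and volume of a bigger cell.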

Another big one is the latest screen tech: Apple can't afford to put an OLED screen on the iPad, since Samsung and LG can't possibly manufacture enough OLED panels to meet demand, whereas Samsung is perfectly fine doing it when they won't sell enough tablets for it to matter.

In the end it's not that the iPhone is clearly inferior to Androids, it's mostly that Apple has some numbers in mind that they consider optimal and going beyond those won't increase their sales (certain amount of battery, certain screens, etc).


An Android could have amazing hardware but everyone knows the OS isn't going to get updated and you're going to have to pay to replace it in 2-3 years.


> The longest battery isn't on an iphone, the best camera isn't on an iphone, the latest screen tech isn't on an iphone, etc. etc.

It is not on any single Android model either. Fragmentation is a thing.


Lol what are you smoking? iPhone as a whole has trounced Android repeatedly over the last few years. There really is no comparison aside from Android having some [random esoteric] hardware that iPhone doesn't for a particular year. None of this is focused on battery life, performance, and efficiency. It is window dressing on top of an inferior architecture.

Also I fundamentally will not buy a piece of hardware that will cease receiving security updates within 36 months of purchase. That's insane, and frankly a major fuck you to anyone considering buying your product.


What can you do on iphone that can't be done better on an android phone?


Two that leap to mind immediately: Install the latest OS on a device older than a year old. Get money for your old phone when you upgrade.


But those have nothing to do with the supposedly "incredible" chip performance Apple has that Qualcomm doesn't.


Resale value absolutely has a lot to do with the better chip performance. You can't sell a dog-slow phone.


You have reduced the scope of the argument. It is a question of efficiency, performance, and battery life, and the iPhone beats Android in all of those categories.

Of course iPhone and Android can do all the same tasks. One just does them with much better efficiency and power usage.


How does iPhone beat Note20 in battery life if the Note lasts longer?

How does the chip performance matter if the Note feels faster in opening apps and scrolling due to the refresh rate?

How does efficiency matter if the Note holds more apps in memory for faster task switching?

How does a faster chip matter if the image processing is way superior in the Note so your photos almost always come out better?

The difference is subjective. I would have expected it to be objective, like with the M1 laptops. The new Air and Pro absolutely obliterate other comparable systems in every metric.



