Hacker News | irjustin's comments

At scale, use weight and supply 1 or 2 extra.

This is how pretty much every IKEA, LEGO, etc works with very small, cheap parts.

End users benefit because it's easy to drop/lose/break one.


So that explains why the smallest parts often have spares in ikea and lego builds. Is this done because of the error in weighing the smallest parts, so they have a margin for error by allowing for an extra 1 or 2?

> Is this done because of the error in weighing the smallest parts, so they have a margin for error by allowing for an extra 1 or 2?

This is a secondary benefit; the primary benefit is if the end user loses/breaks one. That part could very well be a show stopper (IKEA 110630, anyone?). Now the end user is stuck - they have to call, you have to ship. Do you charge? Do you give it for free? They have to wait. They're annoyed, you're annoyed.

No one is happy.

The supply chain headaches of supplying an exact number of tiny parts are terribly expensive, relatively speaking. So you give spares because in the long run it's way cheaper.


Just in case anyone is unaware: Lego does in fact ship single pieces for free, if you lose one.

IKEA does too. You can request the smaller parts you're missing on their website[1]. And if they aren't available online you can check in with their support; once they shipped one part from two countries away, free of charge (and even threw in an extra one). For bigger parts they sometimes have them in stock at local stores.

[1]https://www.ikea.com/us/en/customer-service/spare-parts/


I was very pleasantly surprised when they sent me free replacement hardware to reassemble an old ikea twin bed model that had been discontinued a number of years ago. I assume they use the same hardware in other models they still sell.

I tried that the other day when my kid rebuilt a 3-in-1 set. I couldn't justify 7€ shipping for a 10c part so that the baby orca could have its dorsal fin. My kid didn't care. I was disappointed.

Hmmm, so if I wanted to assemble the lovely Cloud City, all I would need is 697 of my best friends to call in and report that they had lost a different piece...

Lego might be banking on the idea that folks wanting to steal the 697 piece cloud city kit the hard way don't also have ~697~ 696 friends

Just tacking on to mention the smallest parts are most likely to be lost, they’re the ones that - if dropped - seem to bounce and roll under a refrigerator or into the ether. They don’t give extras on the larger parts because they’re not likely to be lost. Frequently enough all it takes is a violent/careless bag opening to send the small pieces flying.

I've often thought about this when assembling Ikea furniture. I have never been shorted. There's got to be someone at Ikea with the job of calculating the target acceptable ratio of over/under supplying small hardware pieces. I figure they can probably give out thousands if not tens of thousands of extra little screws/dowels/plastic bits before it exceeds the cost of missing just one. Between the cost of a support call, maintaining a supply of spare parts, labor and shipping to send out replacements... not to mention the less tangible to calculate loss of reputation to the brand. Quite interesting to think about at scale.
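The back-of-envelope arithmetic here can be sketched out. The numbers below are my own assumptions for illustration, not anything IKEA has published:

```python
# Sketch (assumed numbers, not IKEA's): how many spare screws can you
# give away before it costs more than one "missing part" support incident?

part_cost = 0.01        # assumed: one small screw/dowel, in dollars
incident_cost = 15.00   # assumed: support call + picking + shipping one spare

# Every avoided incident pays for this many extra parts thrown into bags.
break_even = incident_cost / part_cost
print(f"{break_even:.0f} spare parts per avoided incident")  # -> 1500
```

Even with these rough figures, one avoided support incident funds spares for hundreds of boxes, which is why handing out extras is the obvious move at scale.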

Being aware of this, I am waiting for a solution to what to do with the leftovers besides chuck them into a landfill. The problem, of course, is scale. No one is mailing 3 screws and an Allen wrench anywhere. Maybe once you hit 5 pounds of spare Lego . . .

If you have an IKEA store they do have a place for spares, and you can return them there. Assuming you go back from time to time.

For stuff bought online, e.g. Amazon, not much you can do.


How does this work without dispensing onto the scale one by one? Just shaking them out of a hopper?

You're weighing the bag. Dispense a load in and divide the total weight by the unit weight and you know how many you've put in.

Easier with heavy objects, and it needs the variation in weight to be low relative to the number of items you're dispensing.
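The divide-by-unit-weight idea, plus that caveat about weight variation, can be sketched like this (my own illustration, not from the thread; the tolerance value is an assumption):

```python
# Estimate how many parts are in a bag from its weight, given a known
# unit weight. Reject readings that fall too far between whole counts,
# since per-part weight variation accumulates with the number of parts.

def count_by_weight(bag_weight_g, unit_weight_g, tolerance=0.25):
    """Return the estimated part count, or None if ambiguous."""
    estimate = bag_weight_g / unit_weight_g
    nearest = round(estimate)
    if abs(estimate - nearest) > tolerance:
        return None  # could be either of two counts; re-weigh or recount
    return nearest

print(count_by_weight(12.1, 2.0))  # -> 6 (close enough to a whole count)
print(count_by_weight(13.0, 2.0))  # -> None (halfway between 6 and 7)
```

This is also why the approach gets harder as counts grow: with enough parts in the bag, the accumulated variation pushes every reading into the ambiguous zone.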


Sure, but how do the parts get into the bag?

You grab a "rough amount", and by using weight all you're left to count is the difference - 2, 3, 4? Ideally 5 and under.

It's very easy to count <=5 visually, but if your package requires 12 nuts, repeatedly counting up to 12 is so stressful that the poster built an entire counting machine.


Yes, the question is how exactly you grab a "rough amount"? If you need 4 parts in each bag, is it really much easier to construct a system that can dispense 4-6 parts, than one that can dispense exactly 4?

Sorry, I completely missed this. If you don't see it, it's okay - I'll probably miss any replies going forward.

Being upfront, I have no idea what I'm talking about. Just some armchair engineering.

The poster needed 6 parts, which is JUST into annoying territory. My personal thought is that what they need isn't dispensing but alignment. Thinking deeper, I can agree that weight might not be the most efficient here.

They're building an aligning and dispensing tool, but I'd argue that's over-engineering the problem. If the parts are aligned, it's VERY easy to count 6: put a mark along the track, push the parts to the end against your finger, and the mark tells you that you have exactly 6.

To me the hardest part to make "just work" is the dispensing, but if you remove that it becomes a much easier problem. With enough sales volume, you can make a vertical fixture that is a stack of fixed aligning tracks. Your fingers become the dispenser. Sweep and move to the next track.

Just random thoughts.


Or a vibrating separator, which can give perfect counts if needed.

Probably coincidence - the general market is up strongly too. Or it's just too hard to tell either way.

That is just the part that gets the most press - speaking as someone who has lived here for a while now.

1. At a young age, you're taught to follow the rules.

2. "Someone's always watching". Lots of CCTV. Community reports.

3. Plenty of police who have the ability and time to investigate even the most petty things.

Trust in the system starts with 1 but is really carried day to day by 3.


I'll bite.

> It's not zero knowledge for me then. Also - if there is ANY possibility to track anyone. And/or centrally mark someone "nonverified" then it makes more problems than solves.

> Even if I trust my govt (no way), even if it'd be fully ZK with no way to track anyone… still govt would have a way to just block some individual "because".

Is this even actually possible? If you want any sort of identity verification you HAVE to trust someone, whether for age or full ID. It's literally impossible otherwise.

Zero trust systems in society don't work. If you don't care "who" then yes, zero trust is just fine... but then what's the point of "age verification"?


The whole point is that mandating websites to require age verification is more authoritarian than people are pretending it is.


I was more responding to the part about not trusting your own government - how do you build a system where you don't trust a central authority when identity is required?

I don't think it's possible.


You have to trust someone to verify age.

You don't have to trust somebody not to track how the resulting credential is used. And that is what "zero knowledge" means. It means that after you finish the protocol, nobody has learned anything but what they were supposed to learn (in this case, "the person at the other end of this connection is over 18"). If it leaks anything else about the person, it's not zero knowledge. If somebody learns which of the issued credentials was used, it's not zero knowledge. If parties can collude to get information they're not supposed to get, it's not zero knowledge.

It's a technical term of art, not some politician's bullshit. And it isn't complicated to understand.


> why we would want to detect Alzheimer’s

At a personal level, I've been through this with my grandfather.

I want to know. My family wants to know. I want to prepare because there are things I want to do today that I know I won't be able to do in the future.

In many ways, it's just like many terminal cancer diagnoses. You're going to lose that person, but you have some time.


But it is a wildly variable, almost meaningless diagnosis. 3 of my 4 grandparents got an Alzheimer's diagnosis, as did my mom and mother-in-law. The variation in progression and symptoms is so wide that it really seems like a catch-all. One grandmother was fine until about 72; within 2 years she forgot who people were, and within 4 years she had lost all executive function and passed away. The other was diagnosed in her early 80s and lived to be 96 with no major progression - slightly more repeating, but never forgetting people or losing the ability to talk. Similar dichotomy between my mother and mother-in-law, but with considerably different presentations of symptoms.

It's a weird disease, and IMO not even really a single disease - it's a bunch of different causes of cognitive impairment under one umbrella that should be separated out much further to find actual causes and treatments.


Literally - this is my friend =(

For him, it was something like a ~$30m pizza at peak. We're asked not to bring it up.


If he hadn't bought a pizza with it and had held the BTC, he probably would have just sold it when he could make $1000, or $10000, with it.


While likely true, that's not how our brains work.


FWIW, these studies are too early. Large orgs have very sensitive data-privacy considerations and are only now going through the evaluation cycles.

Case in point: this past week I learned Deloitte only recently gave approval to pick Gemini as their AI platform. Rollout hasn't even begun yet, which you can imagine is going to take a while.

To say "AI is failing to deliver" because only 4% efficiency increase is a pre-mature conclusion.


> Rollout hasn't even begun yet which you can

If rollout at Deloitte has not yet begun... How on earth did this clusterfuck [0] happen?

> Deloitte’s member firm in Australia will pay the government a partial refund for a $290,000 report that contained alleged AI-generated errors, including references to non-existent academic research papers and a fabricated quote from a federal court judgment.

[0] https://fortune.com/2025/10/07/deloitte-ai-australia-governm...


Because even if an organisation hasn't rolled out generative AI tools and policies centrally yet, individuals might just use their personal plans anyway (potentially in violation of their contracts)? I believe that's called "shadow AI".


Correct. Where I work we are only "allowed" to use AI since December 2025.

But obviously people were copy/pasting content to ChatGPT and Claude long before that.


It's a $400k+ report to a government, where the references either weren't audited, or were only audited by the AI system that regurgitated them.

That requires more than a single person's involvement.


Haven't even read the source, but I like how it's "a partial refund". The chutzpah to deliver absolute nonsense[0] and then give a partial refund!

[0]: If it contains references to nonexistent papers and fabricated quotes, the conclusions of the report are highly doubtful at best.


290 out of 440 isn't that bad, when Deloitte claims the AI stuff only affected secondary supporting claims and had zero effect on the conclusions, and they did hand over a corrected report (with real references) without asking for more.

... And they did also get blacklisted for the next report.


Exactly - my company started carefully dipping their toes into org-wide AI mid last year (IT had been experimenting earlier than that, but under pretty strict guidelines from infosec). There are so many compliance and data-privacy considerations involved.

And for the record, I think they are absolutely right to be cautious; a mistake in my industry can be disastrous, so a considered approach to integrating this stuff is absolutely warranted. Most established companies outside of tech really can't have the "move fast, break things" mindset.


I'm not sure this is even measuring LLMs in the first place! They say the definition is "big data analytics and AI".

Is putting Google Analytics onto your website and pulling a report 'big data analytics'...?


Meanwhile, "shadow" AI use is around 90%. And if you guess IT would lead the pack on that, you are wrong. It's actually sales and hr that are the most avid unsactioned AI tool users.


Agreed. We've been on the agentic coding roller coaster for only about 9-10 months. It only got properly usable on larger repositories around 3-4 months ago. There are a lot of early adopters, grass roots adoption, etc. But it's really still very early days. Most large companies are still running exactly like they always have. Many smaller companies are worse and years/decades behind on modernizing their operations.

We sell SAAS software to SMEs in Germany. Forget AI, these guys are stuck in the last century when it comes to software. A lot of paper based processes. Cloud is mainly something that comes up in weather predictions for them. These companies don't have budget for a lot of things. The notion that they'll overnight switch to being AI driven companies is arguably more than a bit naive. It indicates a lack of understanding of how the real world works.

There are a lot of highly specialized niche companies that manufacture things that are part of very complex supply chains. The transition will take decades, not months/weeks. They run on demand for products they specialize in making. Their revenue is driven by demand for that stuff and their ability to make and ship it. There are a lot of aspects about how they operate that are definitely not optimal and could be optimized. And AI provides plenty of additional potential to do something about it. But it's not like they were short of opportunities to do so. It takes more than shiny new tools for these companies to move. Change is invasive and disruptive for these companies. And costly. They take the slow and careful perspective to change.

There's a clean split between people that are AI clued in and people working in these companies. The Venn diagram has almost no overlap. It's a huge business opportunity for people that are clued in: a rapidly growing amount of people mainly active in software development. Helping the people on the other side of the diagram is what they'll be mostly doing going forward. There's going to be a huge demand for building AI based stuff for these people. It's not a zero sum game, the amount of new work will dwarf the amount of lost work.

Some of that change is going to be painful. We all have to rethink what we do and re-align our plans in life around that. I'm a programmer. Or I was one until recently. Now I'm a software builder. I still cause software to come into existence. A lot of software actually. But I'm not artisanally coding most of it anymore.


Looking at the study, +4% is what firms that chose to adopt AI got, not the overall average.


I think people want to read about how AI is not working, so those are the articles that are going to get traction.

Personally, I don't think the current frontier models would help the company I work for all that much. The company exists because of its skill in networking and human friendships. The company exists in spite of technological incompetence.

At some level of ability though, a threshold will be reached and a competitor will eat our lunch whole by building a new business around this future model.

It is not going to be a % more productive than our business. It is like the opposite of 0 to 1: the company I work for will go from 1 to 0 really quickly, because we simply won't be able to compete on anything besides those network ties. Those ties will break fast if every other dimension of the business is not even competitive and is really in a different category.


Yes - I was recently talking to a person working as a BA who specializes in corporate AI adoption; they didn't realize you could post screenshots to ChatGPT.

These are not the openclaw folks


What does it even mean to specialise in something and know so little about it? What exactly is this BA person doing?

Genuinely confused, I don't get it


The “corporate” in “corporate AI” can mean tons of work building metrics decks, collecting pain points from users, negotiating with vendors…none of which requires you to understand the actual tool capabilities. For a big company with enough of a push behind it, that’s probably a whole team, none of whom know what they are actually promoting very well.

It’s good money if you can live with yourself, and a mortgage and tuitions make it easy to ignore what you are becoming. I lived that for a few years and then jumped off that train.


Sounds like a perfect job for AI!


What do you mean? Deloitte has been all in on Microsoft AI offerings for quite some time, people have access to a lot of AI tools through MS.


Did they communicate this from the top or just turn a blind eye to it?


They had official trainings on how to use Copilot/ChatGPT and some other tools, plus security and safety trainings and so on; this is not some people deciding to use whatever feature was there from MS by default.


As a counter-point, someone from SAP in Walldorf told me they have access to all models by all companies to their choosing, at a more or less unlimited rate. Don't quote me on that, though, maybe I misunderstood him, it was in a private conversation. Anyway, it sounded like they're using AI heavily.


Yeah. We are only just beginning to get the most out of the internet, and the WWW was invented almost 40 years ago - other parts of it even earlier. Adoption takes time, not to speak of the fact that the technology itself is still developing quickly and might see more and more use cases when it gets better.


> We are only just beginning to get the most out of the internet

The Internet has been getting worse pretty steadily for 20 years now


It's worse, but we're more dependent on it than ever.


> We are only just beginning to get the most out of the internet

"The Internet" is completely dead. Both as an idea and as a practical implementation.

No, Google/Meta/Netflix is not the "world wide web", they're a new iteration of AOL and CompuServe.


OpenAI is buying up like half of the RAM production in the world, presumably on the basis of how great the productivity boost is, so from that perspective this doesn't seem any more premature than the OpenAI scaling plan. And the OpenAI scaling plan is like all the growth in the US economy...


4% isn’t failure! A 4% increase in global GDP would be a big deal (more than what we get in a whole year of progress); and AI adoption is only just getting started.


If we're being strict to peons, then no, that's the human's voice - https://www.youtube.com/watch?v=5r06heQ5HsI.


Hearthstone...


At these scales, financial and social are very intertwined, it's both.


In broad strokes - disagree.

This is the knife-food vs knife-stab vs gun argument. Just because you can cook with a hammer doesn't make it its purpose.


> Just because you can cook with a hammer doesn't make it its purpose.

If you survey all the people who own a hammer and ask what they use it for, cooking is not going to make the list of top 10 activities.

If you look around at what LLMs are being used for, the largest spaces where they have been successfully deployed are astroturfing, scamming, and helping people break from reality by sycophantically echoing their users and encouraging psychosis.


I do think this is a pretty piss-poor example.

Email, by number of emails attempted to be sent, is owned by spammers 10- to 100-fold over legitimate email. You typically don't see this because of a massive effort by any number of companies to ensure that spam dies before it shows up in your mailbox.

To go back one step further: porn was one of the first successful businesses on the internet, which was more than enough motivation for our more conservative congress members to want to ban the internet in the first place.


>that is more than enough motivation for our more conservative congress members to ban the internet in the first place

Yes, and now porn is highly regulated. Maybe that's a hint?


Email volume is mostly robots fighting robots these days.

Today, if we could survey AI contact with humans, I'm afraid the top uses by a wide margin would be scams, cheating, deep fakes, and porn.


Is it possible that these are in the top 10, but not the top 5? I'm pretty sure programming, email/meeting summaries, cheating on homework, random QA, and maybe roleplay/chat are the most popular uses.


Programmers are vastly outnumbered by people who do not program. Email / meeting summaries: maybe. Cheating on homework: maybe not your best example.


I was going to reply to the post above but you said it perfectly.

