
Why? This is orthogonal to how LLMs work, and trivially solved by a minimal hybrid front/sub system (see the sketch below).
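A minimal sketch of what such a hybrid front/sub system could look like, assuming the failing cases are ones a deterministic solver can recognize (plain arithmetic is used here purely as an example); `call_llm`, `answer`, and the regex gate are hypothetical stand-ins, not anyone's actual implementation:

    import re

    # Front layer: recognize queries a deterministic sub-system can handle.
    ARITHMETIC = re.compile(r"^\s*\d+(\s*[-+*/]\s*\d+)+\s*$")

    def call_llm(prompt: str) -> str:
        # Hypothetical placeholder for whatever model API is in use.
        raise NotImplementedError

    def answer(query: str) -> str:
        if ARITHMETIC.match(query):
            # Sub-system: evaluate exactly instead of asking the model.
            # eval() is tolerable here only because the regex restricts
            # input to digits and the operators + - * /.
            return str(eval(query))
        return call_llm(query)

    print(answer("12 * 34 + 5"))  # -> 413

The front layer costs a few lines and removes the whole class of failure before the model is ever consulted.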


Because LLMs are touted as the silver bullet of silver bullets. Built upon the world's knowledge, and with the capacity to call on up-to-date information via agents, they ought to have rivaled the top programmers three days ago.


They might be touted like that, but it seems like you don't understand how they work. The example in the article shows that the prompt limits the LLM by giving it access to only 2000 tokens and by saying "ONLY OUTPUT ...". This is like asking you to solve the same problem but forcing you to deactivate half of your brain and forget any programming experience you have. It's just stupid.
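For reference, the kind of constrained call being described would look something like this (a sketch using the OpenAI Python client; the model name and user message are illustrative, the system prompt keeps the article's "ONLY OUTPUT ..." wording verbatim, and whether the article's 2000 tokens cap the context or the completion isn't clear from the comment, so this sketch caps the completion):

    from openai import OpenAI

    client = OpenAI()

    # Reproduces the constraint described above: a 2000-token budget
    # plus a system prompt forbidding anything but the bare answer.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",          # illustrative; any chat model
        max_tokens=2000,              # the token budget from the article
        messages=[
            {"role": "system", "content": "ONLY OUTPUT ..."},
            {"role": "user", "content": "<the task from the site>"},
        ],
    )
    print(resp.choices[0].message.content)

Forbidding any intermediate reasoning in the output is exactly the "deactivate half of your brain" constraint: the model has to commit to an answer without the step-by-step text that usually improves its accuracy.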


> like you don't understand how they work.

I would not make such assumptions.

> The example in the article shows that the prompt is limiting the LLM by giving it access to only 2000 tokens and also saying "ONLY OUTPUT ..."

The site is pretty simple, and the method is pretty straightforward. If you believe this is unfair, you can always build one yourself.

> It's just stupid.

No, it's a great way of testing things within constraints.


To gauge.



